Test Report: Docker_Linux_containerd 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Test fail (7/278)

Order  Failed test                                       Duration (s)
198    TestMultiNode/serial/RestartKeepsNodes            180.03
199    TestMultiNode/serial/DeleteNode                   6.68
207    TestPreload                                       371.58
215    TestKubernetesUpgrade                             580.42
316    TestNetworkPlugins/group/calico/Start             516.2
333    TestNetworkPlugins/group/bridge/DNS               339.61
336    TestNetworkPlugins/group/enable-default-cni/DNS   370.74
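
Each failure below can usually be reproduced locally through the standard Go test runner, since the suite drives the prebuilt binary shown throughout the logs (out/minikube-linux-amd64). A minimal sketch for the first failure, assuming a minikube source checkout with that binary already built; the timeout value here is an illustrative assumption:

	# From the minikube repository root, with out/minikube-linux-amd64 already built.
	# -run takes an anchored regex over slash-separated test/subtest names, so the
	# full path selects only the failing subtest.
	go test ./test/integration -v -timeout 90m -run "TestMultiNode/serial/RestartKeepsNodes"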
TestMultiNode/serial/RestartKeepsNodes (180.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-102822
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-102822
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-102822: (40.992899975s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102822 --wait=true -v=8 --alsologtostderr
E0114 10:32:53.110310   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:33:20.795528   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:33:33.933900   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-102822 --wait=true -v=8 --alsologtostderr: exit status 80 (2m15.51765458s)

-- stdout --
	* [multinode-102822] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-102822 in cluster multinode-102822
	* Pulling base image ...
	* Restarting existing docker container for "multinode-102822" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-102822-m02 in cluster multinode-102822
	* Pulling base image ...
	* Restarting existing docker container for "multinode-102822-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - env NO_PROXY=192.168.58.2
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I0114 10:31:56.244080  110500 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:31:56.244253  110500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:31:56.244261  110500 out.go:309] Setting ErrFile to fd 2...
	I0114 10:31:56.244266  110500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:31:56.244366  110500 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:31:56.244882  110500 out.go:303] Setting JSON to false
	I0114 10:31:56.246188  110500 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4464,"bootTime":1673687853,"procs":640,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:31:56.246254  110500 start.go:135] virtualization: kvm guest
	I0114 10:31:56.248786  110500 out.go:177] * [multinode-102822] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:31:56.250375  110500 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:31:56.250301  110500 notify.go:220] Checking for updates...
	I0114 10:31:56.253580  110500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:31:56.255205  110500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:31:56.256807  110500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:31:56.258293  110500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:31:56.260196  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:31:56.260244  110500 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:31:56.288513  110500 docker.go:138] docker version: linux-20.10.22
	I0114 10:31:56.288613  110500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:31:56.380775  110500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:31:56.306666417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:31:56.380877  110500 docker.go:255] overlay module found
	I0114 10:31:56.383058  110500 out.go:177] * Using the docker driver based on existing profile
	I0114 10:31:56.384332  110500 start.go:294] selected driver: docker
	I0114 10:31:56.384350  110500 start.go:838] validating driver "docker" against &{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:31:56.384462  110500 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:31:56.384525  110500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:31:56.478549  110500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:31:56.403841818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:31:56.479153  110500 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:31:56.479180  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:31:56.479187  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:31:56.479205  110500 start_flags.go:319] config:
	{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:31:56.482752  110500 out.go:177] * Starting control plane node multinode-102822 in cluster multinode-102822
	I0114 10:31:56.484264  110500 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:31:56.485726  110500 out.go:177] * Pulling base image ...
	I0114 10:31:56.487160  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:31:56.487205  110500 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:31:56.487226  110500 cache.go:57] Caching tarball of preloaded images
	I0114 10:31:56.487203  110500 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:31:56.487522  110500 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:31:56.487542  110500 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:31:56.487744  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:31:56.509755  110500 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:31:56.509787  110500 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:31:56.509802  110500 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:31:56.509837  110500 start.go:364] acquiring machines lock for multinode-102822: {Name:mkd70e1f2f35b7e6f7c31ed25602b988985e4fa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:31:56.509932  110500 start.go:368] acquired machines lock for "multinode-102822" in 68.904µs
	I0114 10:31:56.509951  110500 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:31:56.509955  110500 fix.go:55] fixHost starting: 
	I0114 10:31:56.510146  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:31:56.531979  110500 fix.go:103] recreateIfNeeded on multinode-102822: state=Stopped err=<nil>
	W0114 10:31:56.532013  110500 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:31:56.535180  110500 out.go:177] * Restarting existing docker container for "multinode-102822" ...
	I0114 10:31:56.536670  110500 cli_runner.go:164] Run: docker start multinode-102822
	I0114 10:31:56.910511  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:31:56.935016  110500 kic.go:426] container "multinode-102822" state is running.
	I0114 10:31:56.935341  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:31:56.958657  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:31:56.958868  110500 machine.go:88] provisioning docker machine ...
	I0114 10:31:56.958889  110500 ubuntu.go:169] provisioning hostname "multinode-102822"
	I0114 10:31:56.958926  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:31:56.981260  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:31:56.981492  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0114 10:31:56.981520  110500 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102822 && echo "multinode-102822" | sudo tee /etc/hostname
	I0114 10:31:56.982146  110500 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38962->127.0.0.1:32867: read: connection reset by peer
	I0114 10:32:00.107919  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102822
	
	I0114 10:32:00.107984  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.131658  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:00.131837  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0114 10:32:00.131856  110500 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102822/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:32:00.247376  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:32:00.247412  110500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:32:00.247433  110500 ubuntu.go:177] setting up certificates
	I0114 10:32:00.247441  110500 provision.go:83] configureAuth start
	I0114 10:32:00.247481  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:32:00.270071  110500 provision.go:138] copyHostCerts
	I0114 10:32:00.270112  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:00.270162  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:32:00.270173  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:00.270248  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:32:00.270337  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:00.270358  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:32:00.270365  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:00.270400  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:32:00.270455  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:00.270478  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:32:00.270487  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:00.270524  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:32:00.270583  110500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.multinode-102822 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-102822]
	I0114 10:32:00.494150  110500 provision.go:172] copyRemoteCerts
	I0114 10:32:00.494232  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:32:00.494276  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.517022  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.602641  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 10:32:00.602710  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:32:00.619533  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 10:32:00.619601  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0114 10:32:00.635920  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 10:32:00.635984  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 10:32:00.652526  110500 provision.go:86] duration metric: configureAuth took 405.072699ms
	I0114 10:32:00.652560  110500 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:32:00.652742  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:00.652754  110500 machine.go:91] provisioned docker machine in 3.693874899s
	I0114 10:32:00.652761  110500 start.go:300] post-start starting for "multinode-102822" (driver="docker")
	I0114 10:32:00.652767  110500 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:32:00.652803  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:32:00.652841  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.676636  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.758928  110500 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:32:00.761499  110500 command_runner.go:130] > NAME="Ubuntu"
	I0114 10:32:00.761517  110500 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 10:32:00.761524  110500 command_runner.go:130] > ID=ubuntu
	I0114 10:32:00.761532  110500 command_runner.go:130] > ID_LIKE=debian
	I0114 10:32:00.761540  110500 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 10:32:00.761548  110500 command_runner.go:130] > VERSION_ID="20.04"
	I0114 10:32:00.761559  110500 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 10:32:00.761567  110500 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 10:32:00.761572  110500 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 10:32:00.761584  110500 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 10:32:00.761591  110500 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 10:32:00.761595  110500 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 10:32:00.761748  110500 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:32:00.761772  110500 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:32:00.761786  110500 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:32:00.761796  110500 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:32:00.761810  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:32:00.761869  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:32:00.761948  110500 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:32:00.761962  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /etc/ssl/certs/103062.pem
	I0114 10:32:00.762051  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:32:00.768638  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:32:00.785666  110500 start.go:303] post-start completed in 132.893086ms
	I0114 10:32:00.785739  110500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:32:00.785780  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.808883  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.892008  110500 command_runner.go:130] > 18%
	I0114 10:32:00.892093  110500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:32:00.895774  110500 command_runner.go:130] > 239G
	I0114 10:32:00.895937  110500 fix.go:57] fixHost completed within 4.385975679s
	I0114 10:32:00.895960  110500 start.go:83] releasing machines lock for "multinode-102822", held for 4.386015126s
	I0114 10:32:00.896044  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:32:00.919896  110500 ssh_runner.go:195] Run: cat /version.json
	I0114 10:32:00.919947  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.919973  110500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:32:00.920028  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.942987  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.946487  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:01.054033  110500 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 10:32:01.054097  110500 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0114 10:32:01.054188  110500 ssh_runner.go:195] Run: systemctl --version
	I0114 10:32:01.057819  110500 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0114 10:32:01.057844  110500 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0114 10:32:01.058053  110500 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:32:01.068862  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:32:01.077997  110500 docker.go:189] disabling docker service ...
	I0114 10:32:01.078119  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:32:01.087867  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:32:01.096584  110500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:32:01.179660  110500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:32:01.257778  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:32:01.266818  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:32:01.278503  110500 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:32:01.278530  110500 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:32:01.279238  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:32:01.286923  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:32:01.294475  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:32:01.302050  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0114 10:32:01.309511  110500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:32:01.314863  110500 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0114 10:32:01.315392  110500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:32:01.321309  110500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:32:01.393049  110500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:32:01.455546  110500 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:32:01.455627  110500 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:32:01.458967  110500 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0114 10:32:01.458992  110500 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 10:32:01.458999  110500 command_runner.go:130] > Device: 3fh/63d	Inode: 109         Links: 1
	I0114 10:32:01.459006  110500 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:32:01.459012  110500 command_runner.go:130] > Access: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459016  110500 command_runner.go:130] > Modify: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459023  110500 command_runner.go:130] > Change: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459028  110500 command_runner.go:130] >  Birth: -
	I0114 10:32:01.459049  110500 start.go:472] Will wait 60s for crictl version
	I0114 10:32:01.459115  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:32:01.462116  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:32:01.462198  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:32:01.488685  110500 command_runner.go:130] ! time="2023-01-14T10:32:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:32:01.488775  110500 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:32:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:32:12.536033  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:32:12.557499  110500 command_runner.go:130] > Version:  0.1.0
	I0114 10:32:12.557525  110500 command_runner.go:130] > RuntimeName:  containerd
	I0114 10:32:12.557533  110500 command_runner.go:130] > RuntimeVersion:  1.6.10
	I0114 10:32:12.557540  110500 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0114 10:32:12.559041  110500 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:32:12.559089  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:32:12.580521  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:32:12.581939  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:32:12.602970  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:32:12.607003  110500 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:32:12.608552  110500 cli_runner.go:164] Run: docker network inspect multinode-102822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:32:12.630384  110500 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0114 10:32:12.633652  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:32:12.642818  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:32:12.642867  110500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:32:12.665261  110500 command_runner.go:130] > {
	I0114 10:32:12.665286  110500 command_runner.go:130] >   "images": [
	I0114 10:32:12.665292  110500 command_runner.go:130] >     {
	I0114 10:32:12.665303  110500 command_runner.go:130] >       "id": "sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f",
	I0114 10:32:12.665311  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665320  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20221004-44d545d1"
	I0114 10:32:12.665326  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665335  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665344  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"
	I0114 10:32:12.665354  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665359  110500 command_runner.go:130] >       "size": "25830582",
	I0114 10:32:12.665363  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665369  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665374  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665383  110500 command_runner.go:130] >     },
	I0114 10:32:12.665390  110500 command_runner.go:130] >     {
	I0114 10:32:12.665397  110500 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0114 10:32:12.665403  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665409  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0114 10:32:12.665415  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665419  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665426  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0114 10:32:12.665432  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665437  110500 command_runner.go:130] >       "size": "725911",
	I0114 10:32:12.665443  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665448  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665454  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665458  110500 command_runner.go:130] >     },
	I0114 10:32:12.665464  110500 command_runner.go:130] >     {
	I0114 10:32:12.665470  110500 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0114 10:32:12.665477  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665482  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:32:12.665488  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665496  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665509  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0114 10:32:12.665515  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665519  110500 command_runner.go:130] >       "size": "9058936",
	I0114 10:32:12.665526  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665530  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665537  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665540  110500 command_runner.go:130] >     },
	I0114 10:32:12.665546  110500 command_runner.go:130] >     {
	I0114 10:32:12.665553  110500 command_runner.go:130] >       "id": "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
	I0114 10:32:12.665561  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665570  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:32:12.665576  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665581  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665590  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"
	I0114 10:32:12.665596  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665600  110500 command_runner.go:130] >       "size": "14837849",
	I0114 10:32:12.665607  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665611  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665617  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665621  110500 command_runner.go:130] >     },
	I0114 10:32:12.665627  110500 command_runner.go:130] >     {
	I0114 10:32:12.665634  110500 command_runner.go:130] >       "id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
	I0114 10:32:12.665650  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665658  110500 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.4-0"
	I0114 10:32:12.665662  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665668  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665675  110500 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"
	I0114 10:32:12.665681  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665686  110500 command_runner.go:130] >       "size": "102157811",
	I0114 10:32:12.665692  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665696  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665702  110500 command_runner.go:130] >       },
	I0114 10:32:12.665706  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665716  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665724  110500 command_runner.go:130] >     },
	I0114 10:32:12.665731  110500 command_runner.go:130] >     {
	I0114 10:32:12.665737  110500 command_runner.go:130] >       "id": "sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0",
	I0114 10:32:12.665744  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665750  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:32:12.665754  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665760  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665770  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"
	I0114 10:32:12.665776  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665780  110500 command_runner.go:130] >       "size": "34238163",
	I0114 10:32:12.665786  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665790  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665796  110500 command_runner.go:130] >       },
	I0114 10:32:12.665801  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665807  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665810  110500 command_runner.go:130] >     },
	I0114 10:32:12.665816  110500 command_runner.go:130] >     {
	I0114 10:32:12.665823  110500 command_runner.go:130] >       "id": "sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a",
	I0114 10:32:12.665830  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665835  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:32:12.665842  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665846  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665856  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"
	I0114 10:32:12.665862  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665867  110500 command_runner.go:130] >       "size": "31261869",
	I0114 10:32:12.665873  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665877  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665887  110500 command_runner.go:130] >       },
	I0114 10:32:12.665891  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665899  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665906  110500 command_runner.go:130] >     },
	I0114 10:32:12.665910  110500 command_runner.go:130] >     {
	I0114 10:32:12.665916  110500 command_runner.go:130] >       "id": "sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041",
	I0114 10:32:12.665920  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665927  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:32:12.665931  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665937  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665945  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"
	I0114 10:32:12.665951  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665956  110500 command_runner.go:130] >       "size": "20265805",
	I0114 10:32:12.665962  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665966  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665973  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665978  110500 command_runner.go:130] >     },
	I0114 10:32:12.665984  110500 command_runner.go:130] >     {
	I0114 10:32:12.665990  110500 command_runner.go:130] >       "id": "sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912",
	I0114 10:32:12.665997  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.666002  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:32:12.666008  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666012  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.666022  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"
	I0114 10:32:12.666028  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666032  110500 command_runner.go:130] >       "size": "15798744",
	I0114 10:32:12.666038  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.666042  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.666048  110500 command_runner.go:130] >       },
	I0114 10:32:12.666052  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.666059  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.666063  110500 command_runner.go:130] >     },
	I0114 10:32:12.666069  110500 command_runner.go:130] >     {
	I0114 10:32:12.666075  110500 command_runner.go:130] >       "id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
	I0114 10:32:12.666083  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.666088  110500 command_runner.go:130] >         "registry.k8s.io/pause:3.8"
	I0114 10:32:12.666094  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666099  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.666108  110500 command_runner.go:130] >         "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	I0114 10:32:12.666114  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666127  110500 command_runner.go:130] >       "size": "311286",
	I0114 10:32:12.666133  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.666138  110500 command_runner.go:130] >         "value": "65535"
	I0114 10:32:12.666143  110500 command_runner.go:130] >       },
	I0114 10:32:12.666148  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.666154  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.666158  110500 command_runner.go:130] >     }
	I0114 10:32:12.666163  110500 command_runner.go:130] >   ]
	I0114 10:32:12.666166  110500 command_runner.go:130] > }
	I0114 10:32:12.666314  110500 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:32:12.666326  110500 containerd.go:467] Images already preloaded, skipping extraction
	I0114 10:32:12.666364  110500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:32:12.687374  110500 command_runner.go:130] > {
	I0114 10:32:12.687399  110500 command_runner.go:130] >   "images": [
	I0114 10:32:12.687405  110500 command_runner.go:130] >     {
	I0114 10:32:12.687415  110500 command_runner.go:130] >       "id": "sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f",
	I0114 10:32:12.687421  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687428  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20221004-44d545d1"
	I0114 10:32:12.687433  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687440  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687460  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"
	I0114 10:32:12.687475  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687483  110500 command_runner.go:130] >       "size": "25830582",
	I0114 10:32:12.687490  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687496  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687503  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687509  110500 command_runner.go:130] >     },
	I0114 10:32:12.687514  110500 command_runner.go:130] >     {
	I0114 10:32:12.687523  110500 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0114 10:32:12.687533  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687545  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0114 10:32:12.687554  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687564  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687580  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0114 10:32:12.687590  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687600  110500 command_runner.go:130] >       "size": "725911",
	I0114 10:32:12.687609  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687617  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687623  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687632  110500 command_runner.go:130] >     },
	I0114 10:32:12.687644  110500 command_runner.go:130] >     {
	I0114 10:32:12.687658  110500 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0114 10:32:12.687668  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687695  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:32:12.687702  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687713  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687734  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0114 10:32:12.687743  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687753  110500 command_runner.go:130] >       "size": "9058936",
	I0114 10:32:12.687763  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687771  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687781  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687797  110500 command_runner.go:130] >     },
	I0114 10:32:12.687804  110500 command_runner.go:130] >     {
	I0114 10:32:12.687818  110500 command_runner.go:130] >       "id": "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
	I0114 10:32:12.687828  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687839  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:32:12.687848  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687858  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687869  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"
	I0114 10:32:12.687877  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687884  110500 command_runner.go:130] >       "size": "14837849",
	I0114 10:32:12.687895  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687903  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687913  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687922  110500 command_runner.go:130] >     },
	I0114 10:32:12.687930  110500 command_runner.go:130] >     {
	I0114 10:32:12.687946  110500 command_runner.go:130] >       "id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
	I0114 10:32:12.687956  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687965  110500 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.4-0"
	I0114 10:32:12.687969  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687976  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687991  110500 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"
	I0114 10:32:12.688000  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688008  110500 command_runner.go:130] >       "size": "102157811",
	I0114 10:32:12.688018  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688027  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688037  110500 command_runner.go:130] >       },
	I0114 10:32:12.688046  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688060  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688069  110500 command_runner.go:130] >     },
	I0114 10:32:12.688075  110500 command_runner.go:130] >     {
	I0114 10:32:12.688086  110500 command_runner.go:130] >       "id": "sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0",
	I0114 10:32:12.688096  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688106  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:32:12.688116  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688126  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688141  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"
	I0114 10:32:12.688151  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688161  110500 command_runner.go:130] >       "size": "34238163",
	I0114 10:32:12.688168  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688174  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688181  110500 command_runner.go:130] >       },
	I0114 10:32:12.688192  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688199  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688208  110500 command_runner.go:130] >     },
	I0114 10:32:12.688217  110500 command_runner.go:130] >     {
	I0114 10:32:12.688228  110500 command_runner.go:130] >       "id": "sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a",
	I0114 10:32:12.688238  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688250  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:32:12.688258  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688266  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688278  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"
	I0114 10:32:12.688287  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688295  110500 command_runner.go:130] >       "size": "31261869",
	I0114 10:32:12.688304  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688314  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688323  110500 command_runner.go:130] >       },
	I0114 10:32:12.688333  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688342  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688352  110500 command_runner.go:130] >     },
	I0114 10:32:12.688361  110500 command_runner.go:130] >     {
	I0114 10:32:12.688374  110500 command_runner.go:130] >       "id": "sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041",
	I0114 10:32:12.688387  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688396  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:32:12.688402  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688408  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688418  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"
	I0114 10:32:12.688431  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688439  110500 command_runner.go:130] >       "size": "20265805",
	I0114 10:32:12.688447  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.688455  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688465  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688471  110500 command_runner.go:130] >     },
	I0114 10:32:12.688484  110500 command_runner.go:130] >     {
	I0114 10:32:12.688495  110500 command_runner.go:130] >       "id": "sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912",
	I0114 10:32:12.688502  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688511  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:32:12.688520  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688527  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688542  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"
	I0114 10:32:12.688551  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688558  110500 command_runner.go:130] >       "size": "15798744",
	I0114 10:32:12.688566  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688571  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688580  110500 command_runner.go:130] >       },
	I0114 10:32:12.688587  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688597  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688603  110500 command_runner.go:130] >     },
	I0114 10:32:12.688613  110500 command_runner.go:130] >     {
	I0114 10:32:12.688626  110500 command_runner.go:130] >       "id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
	I0114 10:32:12.688636  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688644  110500 command_runner.go:130] >         "registry.k8s.io/pause:3.8"
	I0114 10:32:12.688653  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688661  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688677  110500 command_runner.go:130] >         "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	I0114 10:32:12.688684  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688742  110500 command_runner.go:130] >       "size": "311286",
	I0114 10:32:12.688753  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688761  110500 command_runner.go:130] >         "value": "65535"
	I0114 10:32:12.688767  110500 command_runner.go:130] >       },
	I0114 10:32:12.688775  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688790  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688801  110500 command_runner.go:130] >     }
	I0114 10:32:12.688808  110500 command_runner.go:130] >   ]
	I0114 10:32:12.688813  110500 command_runner.go:130] > }
	I0114 10:32:12.689381  110500 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:32:12.689398  110500 cache_images.go:84] Images are preloaded, skipping loading
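
The preload check above shells out to "crictl images --output json" and compares the parsed repo tags against the image set expected for Kubernetes v1.25.3; since every tag is already present, extraction and loading are skipped. A minimal Go sketch of that comparison, with struct fields mirroring the JSON printed above and an illustrative (not minikube's actual) required-image list:

// preloaded.go: a sketch of the "all images are preloaded" check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// criImage and criImageList mirror the fields of the crictl JSON above.
type criImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the v1.25.3 image set, not minikube's full list.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/pause:3.8",
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all images are preloaded for containerd runtime.")
	} else {
		fmt.Println("missing:", strings.Join(missing, ", "))
	}
}
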
	I0114 10:32:12.689437  110500 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:32:12.710587  110500 command_runner.go:130] > {
	I0114 10:32:12.710608  110500 command_runner.go:130] >   "status": {
	I0114 10:32:12.710615  110500 command_runner.go:130] >     "conditions": [
	I0114 10:32:12.710621  110500 command_runner.go:130] >       {
	I0114 10:32:12.710628  110500 command_runner.go:130] >         "type": "RuntimeReady",
	I0114 10:32:12.710634  110500 command_runner.go:130] >         "status": true,
	I0114 10:32:12.710640  110500 command_runner.go:130] >         "reason": "",
	I0114 10:32:12.710646  110500 command_runner.go:130] >         "message": ""
	I0114 10:32:12.710651  110500 command_runner.go:130] >       },
	I0114 10:32:12.710657  110500 command_runner.go:130] >       {
	I0114 10:32:12.710668  110500 command_runner.go:130] >         "type": "NetworkReady",
	I0114 10:32:12.710677  110500 command_runner.go:130] >         "status": true,
	I0114 10:32:12.710687  110500 command_runner.go:130] >         "reason": "",
	I0114 10:32:12.710696  110500 command_runner.go:130] >         "message": ""
	I0114 10:32:12.710705  110500 command_runner.go:130] >       }
	I0114 10:32:12.710713  110500 command_runner.go:130] >     ]
	I0114 10:32:12.710720  110500 command_runner.go:130] >   },
	I0114 10:32:12.710728  110500 command_runner.go:130] >   "cniconfig": {
	I0114 10:32:12.710738  110500 command_runner.go:130] >     "PluginDirs": [
	I0114 10:32:12.710749  110500 command_runner.go:130] >       "/opt/cni/bin"
	I0114 10:32:12.710758  110500 command_runner.go:130] >     ],
	I0114 10:32:12.710773  110500 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.mk",
	I0114 10:32:12.710784  110500 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0114 10:32:12.710792  110500 command_runner.go:130] >     "Prefix": "eth",
	I0114 10:32:12.710802  110500 command_runner.go:130] >     "Networks": [
	I0114 10:32:12.710812  110500 command_runner.go:130] >       {
	I0114 10:32:12.710820  110500 command_runner.go:130] >         "Config": {
	I0114 10:32:12.710835  110500 command_runner.go:130] >           "Name": "cni-loopback",
	I0114 10:32:12.710847  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:32:12.710856  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:32:12.710866  110500 command_runner.go:130] >             {
	I0114 10:32:12.710875  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.710886  110500 command_runner.go:130] >                 "type": "loopback",
	I0114 10:32:12.710896  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:32:12.710902  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.710907  110500 command_runner.go:130] >               },
	I0114 10:32:12.710917  110500 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0114 10:32:12.710927  110500 command_runner.go:130] >             }
	I0114 10:32:12.710936  110500 command_runner.go:130] >           ],
	I0114 10:32:12.710949  110500 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0114 10:32:12.710956  110500 command_runner.go:130] >         },
	I0114 10:32:12.710967  110500 command_runner.go:130] >         "IFName": "lo"
	I0114 10:32:12.710977  110500 command_runner.go:130] >       },
	I0114 10:32:12.710986  110500 command_runner.go:130] >       {
	I0114 10:32:12.710994  110500 command_runner.go:130] >         "Config": {
	I0114 10:32:12.711008  110500 command_runner.go:130] >           "Name": "kindnet",
	I0114 10:32:12.711018  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:32:12.711025  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:32:12.711035  110500 command_runner.go:130] >             {
	I0114 10:32:12.711044  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.711055  110500 command_runner.go:130] >                 "type": "ptp",
	I0114 10:32:12.711066  110500 command_runner.go:130] >                 "ipam": {
	I0114 10:32:12.711078  110500 command_runner.go:130] >                   "type": "host-local"
	I0114 10:32:12.711088  110500 command_runner.go:130] >                 },
	I0114 10:32:12.711096  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.711106  110500 command_runner.go:130] >               },
	I0114 10:32:12.711127  110500 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0114 10:32:12.711140  110500 command_runner.go:130] >             },
	I0114 10:32:12.711150  110500 command_runner.go:130] >             {
	I0114 10:32:12.711159  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.711172  110500 command_runner.go:130] >                 "type": "portmap",
	I0114 10:32:12.711183  110500 command_runner.go:130] >                 "capabilities": {
	I0114 10:32:12.711194  110500 command_runner.go:130] >                   "portMappings": true
	I0114 10:32:12.711201  110500 command_runner.go:130] >                 },
	I0114 10:32:12.711211  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:32:12.711223  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.711231  110500 command_runner.go:130] >               },
	I0114 10:32:12.711245  110500 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0114 10:32:12.711255  110500 command_runner.go:130] >             }
	I0114 10:32:12.711263  110500 command_runner.go:130] >           ],
	I0114 10:32:12.711307  110500 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0114 10:32:12.711317  110500 command_runner.go:130] >         },
	I0114 10:32:12.711325  110500 command_runner.go:130] >         "IFName": "eth0"
	I0114 10:32:12.711331  110500 command_runner.go:130] >       }
	I0114 10:32:12.711337  110500 command_runner.go:130] >     ]
	I0114 10:32:12.711348  110500 command_runner.go:130] >   },
	I0114 10:32:12.711358  110500 command_runner.go:130] >   "config": {
	I0114 10:32:12.711366  110500 command_runner.go:130] >     "containerd": {
	I0114 10:32:12.711377  110500 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0114 10:32:12.711388  110500 command_runner.go:130] >       "defaultRuntimeName": "default",
	I0114 10:32:12.711399  110500 command_runner.go:130] >       "defaultRuntime": {
	I0114 10:32:12.711409  110500 command_runner.go:130] >         "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711419  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:32:12.711430  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:32:12.711438  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:32:12.711449  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:32:12.711461  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:32:12.711472  110500 command_runner.go:130] >         "options": null,
	I0114 10:32:12.711484  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:32:12.711495  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:32:12.711504  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:32:12.711513  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:32:12.711522  110500 command_runner.go:130] >       },
	I0114 10:32:12.711531  110500 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0114 10:32:12.711542  110500 command_runner.go:130] >         "runtimeType": "",
	I0114 10:32:12.711553  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:32:12.711563  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:32:12.711575  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:32:12.711586  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:32:12.711597  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:32:12.711607  110500 command_runner.go:130] >         "options": null,
	I0114 10:32:12.711616  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:32:12.711627  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:32:12.711638  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:32:12.711649  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:32:12.711658  110500 command_runner.go:130] >       },
	I0114 10:32:12.711684  110500 command_runner.go:130] >       "runtimes": {
	I0114 10:32:12.711694  110500 command_runner.go:130] >         "default": {
	I0114 10:32:12.711706  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711717  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:32:12.711728  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:32:12.711739  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:32:12.711751  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:32:12.711762  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:32:12.711781  110500 command_runner.go:130] >           "options": null,
	I0114 10:32:12.711794  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:32:12.711805  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:32:12.711816  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:32:12.711826  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:32:12.711837  110500 command_runner.go:130] >         },
	I0114 10:32:12.711848  110500 command_runner.go:130] >         "runc": {
	I0114 10:32:12.711861  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711871  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:32:12.711879  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:32:12.711890  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:32:12.711902  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:32:12.711912  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:32:12.711923  110500 command_runner.go:130] >           "options": {
	I0114 10:32:12.711975  110500 command_runner.go:130] >             "SystemdCgroup": false
	I0114 10:32:12.711989  110500 command_runner.go:130] >           },
	I0114 10:32:12.711998  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:32:12.712006  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:32:12.712017  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:32:12.712028  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:32:12.712036  110500 command_runner.go:130] >         }
	I0114 10:32:12.712045  110500 command_runner.go:130] >       },
	I0114 10:32:12.712057  110500 command_runner.go:130] >       "noPivot": false,
	I0114 10:32:12.712068  110500 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0114 10:32:12.712078  110500 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0114 10:32:12.712089  110500 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0114 10:32:12.712099  110500 command_runner.go:130] >     },
	I0114 10:32:12.712107  110500 command_runner.go:130] >     "cni": {
	I0114 10:32:12.712118  110500 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0114 10:32:12.712130  110500 command_runner.go:130] >       "confDir": "/etc/cni/net.mk",
	I0114 10:32:12.712140  110500 command_runner.go:130] >       "maxConfNum": 1,
	I0114 10:32:12.712151  110500 command_runner.go:130] >       "confTemplate": "",
	I0114 10:32:12.712161  110500 command_runner.go:130] >       "ipPref": ""
	I0114 10:32:12.712170  110500 command_runner.go:130] >     },
	I0114 10:32:12.712177  110500 command_runner.go:130] >     "registry": {
	I0114 10:32:12.712189  110500 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0114 10:32:12.712199  110500 command_runner.go:130] >       "mirrors": null,
	I0114 10:32:12.712209  110500 command_runner.go:130] >       "configs": null,
	I0114 10:32:12.712220  110500 command_runner.go:130] >       "auths": null,
	I0114 10:32:12.712232  110500 command_runner.go:130] >       "headers": null
	I0114 10:32:12.712242  110500 command_runner.go:130] >     },
	I0114 10:32:12.712251  110500 command_runner.go:130] >     "imageDecryption": {
	I0114 10:32:12.712261  110500 command_runner.go:130] >       "keyModel": "node"
	I0114 10:32:12.712267  110500 command_runner.go:130] >     },
	I0114 10:32:12.712274  110500 command_runner.go:130] >     "disableTCPService": true,
	I0114 10:32:12.712281  110500 command_runner.go:130] >     "streamServerAddress": "",
	I0114 10:32:12.712292  110500 command_runner.go:130] >     "streamServerPort": "10010",
	I0114 10:32:12.712303  110500 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0114 10:32:12.712312  110500 command_runner.go:130] >     "enableSelinux": false,
	I0114 10:32:12.712324  110500 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0114 10:32:12.712337  110500 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.8",
	I0114 10:32:12.712348  110500 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0114 10:32:12.712360  110500 command_runner.go:130] >     "systemdCgroup": false,
	I0114 10:32:12.712368  110500 command_runner.go:130] >     "enableTLSStreaming": false,
	I0114 10:32:12.712379  110500 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0114 10:32:12.712388  110500 command_runner.go:130] >       "tlsCertFile": "",
	I0114 10:32:12.712398  110500 command_runner.go:130] >       "tlsKeyFile": ""
	I0114 10:32:12.712407  110500 command_runner.go:130] >     },
	I0114 10:32:12.712417  110500 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0114 10:32:12.712427  110500 command_runner.go:130] >     "disableCgroup": false,
	I0114 10:32:12.712436  110500 command_runner.go:130] >     "disableApparmor": false,
	I0114 10:32:12.712446  110500 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0114 10:32:12.712455  110500 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0114 10:32:12.712466  110500 command_runner.go:130] >     "disableProcMount": false,
	I0114 10:32:12.712477  110500 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0114 10:32:12.712487  110500 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0114 10:32:12.712496  110500 command_runner.go:130] >     "disableHugetlbController": true,
	I0114 10:32:12.712508  110500 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0114 10:32:12.712519  110500 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0114 10:32:12.712530  110500 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0114 10:32:12.712544  110500 command_runner.go:130] >     "enableUnprivilegedPorts": false,
	I0114 10:32:12.712557  110500 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0114 10:32:12.712569  110500 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0114 10:32:12.712582  110500 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0114 10:32:12.712594  110500 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0114 10:32:12.712607  110500 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0114 10:32:12.712615  110500 command_runner.go:130] >   },
	I0114 10:32:12.712623  110500 command_runner.go:130] >   "golang": "go1.18.8",
	I0114 10:32:12.712635  110500 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0114 10:32:12.712647  110500 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0114 10:32:12.712656  110500 command_runner.go:130] > }
	I0114 10:32:12.712858  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:32:12.712872  110500 cni.go:156] 3 nodes found, recommending kindnet
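
The line above reflects a simple heuristic: with no explicit CNI choice and more than one node in the profile, minikube recommends kindnet so pods are routable across hosts. A hypothetical simplification of that decision in Go (not minikube's actual cni.New code):

// cni_choice.go: a sketch of the node-count heuristic implied by
// "3 nodes found, recommending kindnet".
package main

import "fmt"

// recommendCNI is a hypothetical simplification of the choice logged above.
func recommendCNI(nodeCount int, userChoice string) string {
	if userChoice != "" {
		return userChoice // an explicit --cni flag always wins
	}
	if nodeCount > 1 {
		return "kindnet" // multi-node: pods must be routable across hosts
	}
	return "" // single node: leave the runtime's default networking in place
}

func main() {
	fmt.Println(recommendCNI(3, "")) // prints: kindnet
}
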
	I0114 10:32:12.712887  110500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:32:12.712904  110500 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102822 NodeName:multinode-102822 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:32:12.713036  110500 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-102822"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
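
The YAML above is rendered from the kubeadm options struct logged earlier. A minimal sketch of that rendering for the InitConfiguration section using Go's text/template; the parameter struct and its field names are hypothetical stand-ins for minikube's own:

// kubeadm_config.go: a sketch of templating the InitConfiguration above.
package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

// params holds the substituted values; hypothetical field names.
type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.58.2",
		APIServerPort:    8443,
		CRISocket:        "/run/containerd/containerd.sock",
		NodeName:         "multinode-102822",
		NodeIP:           "192.168.58.2",
	})
}
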
	
	I0114 10:32:12.713135  110500 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-102822 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:32:12.713190  110500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:32:12.719488  110500 command_runner.go:130] > kubeadm
	I0114 10:32:12.719509  110500 command_runner.go:130] > kubectl
	I0114 10:32:12.719515  110500 command_runner.go:130] > kubelet
	I0114 10:32:12.720035  110500 binaries.go:44] Found k8s binaries, skipping transfer
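
The three directory entries listed above are all the transfer step needs: if kubeadm, kubectl and kubelet already exist under the version directory, the binary copy is skipped. A small Go sketch of that presence check:

// binaries_check.go: a sketch of "Found k8s binaries, skipping transfer".
package main

import (
	"fmt"
	"os"
)

// haveBinaries reports whether kubeadm, kubectl and kubelet all exist in dir.
func haveBinaries(dir string) bool {
	needed := map[string]bool{"kubeadm": false, "kubectl": false, "kubelet": false}
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		if _, ok := needed[e.Name()]; ok {
			needed[e.Name()] = true
		}
	}
	for _, found := range needed {
		if !found {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(haveBinaries("/var/lib/minikube/binaries/v1.25.3"))
}
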
	I0114 10:32:12.720098  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:32:12.726696  110500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0114 10:32:12.738909  110500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:32:12.751222  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
	I0114 10:32:12.763791  110500 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:32:12.766553  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
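
The bash one-liner above makes the /etc/hosts update idempotent: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the result back into place. A Go sketch of the same upsert; it writes to /tmp/hosts.updated instead of sudo-copying over /etc/hosts so the example is safe to run:

// hosts_entry.go: a sketch of the idempotent hosts-file update above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "<ip>\t<name>",
// mirroring the grep-and-echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	updated := upsertHost(strings.TrimRight(string(data), "\n"),
		"192.168.58.2", "control-plane.minikube.internal")
	_ = os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644)
}
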
	I0114 10:32:12.775632  110500 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822 for IP: 192.168.58.2
	I0114 10:32:12.775780  110500 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:32:12.775823  110500 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:32:12.775880  110500 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key
	I0114 10:32:12.775939  110500 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key.cee25041
	I0114 10:32:12.775975  110500 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key
	I0114 10:32:12.775986  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0114 10:32:12.775995  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0114 10:32:12.776009  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0114 10:32:12.776020  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0114 10:32:12.776030  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 10:32:12.776040  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 10:32:12.776050  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:32:12.776060  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:32:12.776095  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:32:12.776118  110500 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:32:12.776127  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:32:12.776146  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:32:12.776170  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:32:12.776190  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:32:12.776223  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:32:12.776254  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem -> /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.776268  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /usr/share/ca-certificates/103062.pem
	I0114 10:32:12.776276  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:12.776801  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:32:12.793649  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:32:12.809955  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:32:12.826165  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:32:12.842333  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:32:12.858766  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:32:12.874864  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:32:12.891037  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:32:12.907157  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:32:12.923498  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:32:12.940509  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:32:12.957076  110500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:32:12.969247  110500 ssh_runner.go:195] Run: openssl version
	I0114 10:32:12.973757  110500 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 10:32:12.973888  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:32:12.980925  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983712  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983767  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983798  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.988253  110500 command_runner.go:130] > 51391683
	I0114 10:32:12.988302  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:32:12.994808  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:32:13.001692  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004632  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004666  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004710  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.009155  110500 command_runner.go:130] > 3ec20f2e
	I0114 10:32:13.009284  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:32:13.015757  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:32:13.022799  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025630  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025669  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025717  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.030092  110500 command_runner.go:130] > b5213941
	I0114 10:32:13.030263  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
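
The hash-and-symlink sequence above is how OpenSSL finds trusted certificates: lookups in /etc/ssl/certs go by subject-name hash, so each PEM needs a companion symlink named "<hash>.0" (51391683.0, 3ec20f2e.0 and b5213941.0 here). A Go sketch of creating one such link, with illustrative paths:

// cert_symlink.go: a sketch of the subject-hash symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject-name hash and
// creates the "<hash>.0" symlink OpenSSL's lookup expects.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Println(err)
	}
}
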
	I0114 10:32:13.036698  110500 kubeadm.go:396] StartCluster: {Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:32:13.036791  110500 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:32:13.036836  110500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:32:13.058223  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:13.058244  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:13.058251  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:13.058260  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:13.058269  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:13.058277  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:13.058286  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:13.058300  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:13.060045  110500 cri.go:87] found id: "8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	I0114 10:32:13.060064  110500 cri.go:87] found id: "dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653"
	I0114 10:32:13.060072  110500 cri.go:87] found id: "fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642"
	I0114 10:32:13.060078  110500 cri.go:87] found id: "6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2"
	I0114 10:32:13.060084  110500 cri.go:87] found id: "1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028"
	I0114 10:32:13.060094  110500 cri.go:87] found id: "9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:32:13.060102  110500 cri.go:87] found id: "72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf"
	I0114 10:32:13.060109  110500 cri.go:87] found id: "1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2"
	I0114 10:32:13.060124  110500 cri.go:87] found id: ""
	I0114 10:32:13.060170  110500 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0114 10:32:13.070971  110500 command_runner.go:130] > null
	I0114 10:32:13.071003  110500 cri.go:114] JSON = null
	W0114 10:32:13.071044  110500 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
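
The warning above comes from a shape mismatch: "runc list -f json" prints the literal "null" when it sees no containers in that root, which decodes to an empty slice, while crictl had just reported 8 container IDs. A short Go reproduction of that decode:

// paused_check.go: a sketch of the JSON = null mismatch behind the warning.
package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer mirrors the fields runc would print per container.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	raw := []byte("null") // what `sudo runc --root ... list -f json` returned
	var cs []runcContainer
	if err := json.Unmarshal(raw, &cs); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	crictlCount := 8 // IDs reported by crictl ps just before
	if len(cs) != crictlCount {
		fmt.Printf("unpause failed: list paused: list returned %d containers, but ps returned %d\n",
			len(cs), crictlCount)
	}
}
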
	I0114 10:32:13.071091  110500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:32:13.077185  110500 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0114 10:32:13.077211  110500 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0114 10:32:13.077219  110500 command_runner.go:130] > /var/lib/minikube/etcd:
	I0114 10:32:13.077224  110500 command_runner.go:130] > member
	I0114 10:32:13.077710  110500 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:32:13.077727  110500 kubeadm.go:627] restartCluster start
	I0114 10:32:13.077773  110500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:32:13.083937  110500 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.084292  110500 kubeconfig.go:135] verify returned: extract IP: "multinode-102822" does not appear in /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:13.084399  110500 kubeconfig.go:146] "multinode-102822" context is missing from /home/jenkins/minikube-integration/15642-3818/kubeconfig - will repair!
	I0114 10:32:13.084667  110500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:13.085127  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:13.085339  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:13.085718  110500 cert_rotation.go:137] Starting client certificate rotation controller
	I0114 10:32:13.085897  110500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:32:13.092314  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.092361  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.099983  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.300390  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.300471  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.309061  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.500394  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.500496  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.508843  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.700076  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.700158  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.708648  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.900982  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.901059  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.909312  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.100665  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.100752  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.108909  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.300163  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.300255  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.308427  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.500734  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.500820  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.509529  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.700875  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.700944  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.709372  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.900776  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.900858  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.909023  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.100211  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.100291  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.108635  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.300982  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.301056  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.309251  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.500594  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.500686  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.509036  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.700390  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.700477  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.709024  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.900241  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.900308  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.908659  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.101018  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:16.101096  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:16.109426  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.109444  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:16.109480  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:16.117151  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.117179  110500 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
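
The retry loop above is minikube polling the node over SSH for a kube-apiserver process until a deadline; once every pgrep attempt has exited 1 (hence the empty stdout/stderr blocks), it concludes the cluster needs reconfiguring. A minimal local sketch of that wait loop, assuming a hypothetical runCmd helper in place of minikube's ssh_runner and the roughly 200ms cadence visible in the timestamps:

// Sketch only: runCmd is a hypothetical stand-in for an SSH command runner.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// runCmd runs a command and reports only whether it exited zero.
func runCmd(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// waitForAPIServerProcess retries pgrep until it finds the process or times out.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 1 with no output while kube-apiserver is not running,
		// which is exactly the "Process exited with status 1" seen above.
		if err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}
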
	I0114 10:32:16.117187  110500 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:32:16.117204  110500 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:32:16.117249  110500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:32:16.139034  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:16.139057  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:16.139065  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:16.139074  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:16.139082  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:16.139089  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:16.139097  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:16.139108  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.140873  110500 cri.go:87] found id: "8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	I0114 10:32:16.140897  110500 cri.go:87] found id: "dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653"
	I0114 10:32:16.140903  110500 cri.go:87] found id: "fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642"
	I0114 10:32:16.140909  110500 cri.go:87] found id: "6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2"
	I0114 10:32:16.140915  110500 cri.go:87] found id: "1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028"
	I0114 10:32:16.140925  110500 cri.go:87] found id: "9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:32:16.140940  110500 cri.go:87] found id: "72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf"
	I0114 10:32:16.140949  110500 cri.go:87] found id: "1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2"
	I0114 10:32:16.140963  110500 cri.go:87] found id: ""
	I0114 10:32:16.140974  110500 cri.go:232] Stopping containers: [8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653 fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2]
	I0114 10:32:16.141047  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:32:16.143873  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:32:16.143954  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653 fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.164186  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:16.164545  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:16.164961  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:16.165442  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:16.165866  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:16.166173  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:16.166530  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:16.166912  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.168495  110500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
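
The container teardown above has two parts: list every CRI container carrying the kube-system pod-namespace label, stop them in one crictl call, then stop the kubelet so static pods are not immediately recreated. A sketch of the same sequence as plain exec calls (error handling simplified):

// Sketch of the kube-system container stop step, shelling out to crictl
// the same way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all CRI containers in the kube-system namespace by pod label.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// Stop all found containers in a single crictl invocation.
	args := append([]string{"crictl", "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		fmt.Println("crictl stop:", err)
	}
	// Stop the kubelet so it does not restart the static pods mid-reconfigure.
	_ = exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}
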
	I0114 10:32:16.178319  110500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:32:16.184529  110500 command_runner.go:130] > -rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	I0114 10:32:16.184551  110500 command_runner.go:130] > -rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.184560  110500 command_runner.go:130] > -rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	I0114 10:32:16.184578  110500 command_runner.go:130] > -rw------- 1 root root 5604 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	I0114 10:32:16.185073  110500 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	
	I0114 10:32:16.185115  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:32:16.191129  110500 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 10:32:16.191775  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:32:16.197622  110500 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 10:32:16.198190  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.204644  110500 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.204691  110500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.211037  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:32:16.217158  110500 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.217202  110500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
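
The grep/rm exchange above is a staleness check: each existing kubeconfig is kept only if it already points at the expected control-plane endpoint, and any file where grep exits non-zero (the "may not be in ... - will remove" branch) is deleted so kubeadm regenerates it. A sketch of that cleanup under the same paths:

// Sketch of the stale-kubeconfig cleanup; paths and endpoint as in the log.
package main

import "os/exec"

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent from the file.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
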
	I0114 10:32:16.223266  110500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:32:16.229757  110500 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 10:32:16.229774  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:16.267698  110500 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:32:16.267727  110500 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0114 10:32:16.267829  110500 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0114 10:32:16.268005  110500 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 10:32:16.268198  110500 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0114 10:32:16.268305  110500 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0114 10:32:16.268411  110500 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0114 10:32:16.268622  110500 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0114 10:32:16.268811  110500 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0114 10:32:16.269005  110500 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 10:32:16.269151  110500 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 10:32:16.269270  110500 command_runner.go:130] > [certs] Using the existing "sa" key
	I0114 10:32:16.271593  110500 command_runner.go:130] ! W0114 10:32:16.262790     715 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:16.271629  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:16.308452  110500 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:32:16.591661  110500 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0114 10:32:16.834807  110500 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0114 10:32:16.917085  110500 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:32:16.963606  110500 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:32:16.966257  110500 command_runner.go:130] ! W0114 10:32:16.303745     726 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:16.966297  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.014855  110500 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:32:17.015614  110500 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:32:17.015700  110500 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 10:32:17.097994  110500 command_runner.go:130] ! W0114 10:32:16.998756     739 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:17.098089  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.134582  110500 command_runner.go:130] ! W0114 10:32:17.134094     774 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:17.147442  110500 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:32:17.147476  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:32:17.147487  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:32:17.147503  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:32:17.147521  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.236640  110500 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:32:17.242814  110500 command_runner.go:130] ! W0114 10:32:17.230809     792 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
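
The block above drives five kubeadm init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence as direct exec calls rather than the bash -c wrapper in the log (the PATH prefix matches the logged command; other environment handling is simplified):

// Sketch of the kubeadm phase sequence used to reconfigure the control plane.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Pin the versioned binaries first on PATH, as the logged command does.
		cmd.Env = append(cmd.Env, "PATH=/var/lib/minikube/binaries/v1.25.3:/usr/bin:/bin")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}
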
	I0114 10:32:17.242849  110500 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:32:17.242894  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:17.752564  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:18.252426  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:18.260999  110500 command_runner.go:130] > 1113
	I0114 10:32:18.261657  110500 api_server.go:71] duration metric: took 1.01880732s to wait for apiserver process to appear ...
	I0114 10:32:18.261681  110500 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:32:18.261693  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:21.029950  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 10:32:21.029985  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 10:32:21.530625  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:21.535017  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:32:21.535038  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:32:22.030583  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:22.034640  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:32:22.034667  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:32:22.530186  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:22.535299  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
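
The healthz progression above is typical of a restarting apiserver: first a 403 (the unauthenticated probe is "system:anonymous", which RBAC rejects until bootstrap roles exist), then 500s while the rbac and scheduling poststarthooks finish, then 200/"ok". A sketch of a poller that treats anything other than 200/"ok" as not-ready; TLS verification is skipped here only because the sketch carries no CA bundle, whereas minikube trusts the cluster CA:

// Sketch of the healthz wait loop against the apiserver endpoint.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 responses fall through here and trigger a retry.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return errors.New("apiserver healthz never returned ok")
}

func main() {
	fmt.Println(waitHealthz("https://192.168.58.2:8443/healthz", 4*time.Minute))
}
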
	I0114 10:32:22.535363  110500 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0114 10:32:22.535370  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:22.535378  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:22.535387  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:22.542430  110500 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0114 10:32:22.542456  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:22.542463  110500 round_trippers.go:580]     Audit-Id: 1aad19f3-6767-4611-a5ba-372dd35e9aaa
	I0114 10:32:22.542469  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:22.542478  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:22.542486  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:22.542495  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:22.542501  110500 round_trippers.go:580]     Content-Length: 263
	I0114 10:32:22.542510  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 10:32:22.542548  110500 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 10:32:22.542642  110500 api_server.go:140] control plane version: v1.25.3
	I0114 10:32:22.542659  110500 api_server.go:130] duration metric: took 4.280973232s to wait for apiserver health ...
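
The GET /version probe returns the small JSON document printed above, and the only field the log reports back is gitVersion. Decoding just that field is a one-struct affair (sketch, using a trimmed copy of the response body):

// Sketch of extracting the control plane version from the /version response.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	body := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.3"}`)
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.25.3
}
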
	I0114 10:32:22.542670  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:32:22.542681  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:32:22.544760  110500 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 10:32:22.546388  110500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:32:22.549910  110500 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 10:32:22.549962  110500 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 10:32:22.549975  110500 command_runner.go:130] > Device: 34h/52d	Inode: 565966      Links: 1
	I0114 10:32:22.549983  110500 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:32:22.549994  110500 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:32:22.550002  110500 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:32:22.550010  110500 command_runner.go:130] > Change: 2023-01-14 10:06:59.488187836 +0000
	I0114 10:32:22.550015  110500 command_runner.go:130] >  Birth: -
	I0114 10:32:22.550073  110500 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:32:22.550086  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:32:22.563145  110500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:32:23.556622  110500 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:32:23.558324  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:32:23.559964  110500 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 10:32:23.572522  110500 command_runner.go:130] > daemonset.apps/kindnet configured
	I0114 10:32:23.576291  110500 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.013116951s)
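
The CNI step above copies the kindnet manifest to the node and applies it with the pinned kubectl against the node-local kubeconfig; because apply is idempotent, objects already present report "unchanged" and only the DaemonSet comes back "configured". A sketch of that single call:

// Sketch of applying the CNI manifest with the versioned kubectl, as logged.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.25.3/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
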
	I0114 10:32:23.576318  110500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:32:23.576411  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:23.576418  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.576426  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.576434  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.580021  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:23.580051  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.580062  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.580070  110500 round_trippers.go:580]     Audit-Id: d0301af9-af93-4972-a603-26d225a78b49
	I0114 10:32:23.580078  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.580086  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.580101  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.580109  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.580814  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"684"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84105 chars]
	I0114 10:32:23.586341  110500 system_pods.go:59] 12 kube-system pods found
	I0114 10:32:23.586386  110500 system_pods.go:61] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:23.586402  110500 system_pods.go:61] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 10:32:23.586417  110500 system_pods.go:61] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:23.586424  110500 system_pods.go:61] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:23.586434  110500 system_pods.go:61] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0114 10:32:23.586442  110500 system_pods.go:61] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:23.586451  110500 system_pods.go:61] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 10:32:23.586463  110500 system_pods.go:61] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:23.586473  110500 system_pods.go:61] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:23.586480  110500 system_pods.go:61] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:23.586490  110500 system_pods.go:61] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:23.586499  110500 system_pods.go:61] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running
	I0114 10:32:23.586505  110500 system_pods.go:74] duration metric: took 10.181942ms to wait for pod list to return data ...
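
The pod inventory above comes from a single GET on the kube-system pod collection, summarised per pod with phase and readiness. A client-go sketch of the same listing, assuming a kubeconfig at the default location (minikube builds its client differently internally):

// Sketch: list kube-system pods with client-go and print a short summary.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
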
	I0114 10:32:23.586518  110500 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:32:23.586586  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:23.586597  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.586606  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.586613  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.588779  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.588796  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.588806  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.588815  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.588826  110500 round_trippers.go:580]     Audit-Id: 6ef374f5-6c43-4398-b878-dabcf026fa21
	I0114 10:32:23.588834  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.588845  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.588857  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.589115  110500 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"684"},"items":[{"metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15962 chars]
	I0114 10:32:23.589921  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589939  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589950  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589954  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589958  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589961  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589966  110500 node_conditions.go:105] duration metric: took 3.442775ms to run NodePressure ...
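
The NodePressure pass above reads each node's capacity from the NodeList: the repeated storage/cpu pairs are one line per node of the three-node cluster. A client-go sketch of reading those same two capacity fields (client setup assumed as in the previous sketch):

// Sketch: print per-node ephemeral-storage and cpu capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
}
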
	I0114 10:32:23.589987  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:23.691341  110500 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0114 10:32:23.734463  110500 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0114 10:32:23.736834  110500 command_runner.go:130] ! W0114 10:32:23.629296    1801 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:23.736875  110500 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 10:32:23.736963  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0114 10:32:23.736973  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.736985  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.736994  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.739456  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.739481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.739491  110500 round_trippers.go:580]     Audit-Id: c46d33b0-2b93-4009-a7b0-a83f39889d32
	I0114 10:32:23.739500  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.739509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.739522  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.739534  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.739546  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.739840  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"686"},"items":[{"metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30422 chars]
	I0114 10:32:23.740800  110500 kubeadm.go:778] kubelet initialised
	I0114 10:32:23.740814  110500 kubeadm.go:779] duration metric: took 3.928883ms waiting for restarted kubelet to initialise ...
	I0114 10:32:23.740821  110500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:32:23.740868  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:23.740876  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.740885  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.740894  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.743447  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.743463  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.743469  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.743485  110500 round_trippers.go:580]     Audit-Id: c6bded4f-4aa4-42be-a4e4-20ebe0546a46
	I0114 10:32:23.743493  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.743500  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.743509  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.743518  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.744192  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"686"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84105 chars]
	I0114 10:32:23.746598  110500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.746657  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:23.746664  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.746672  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.746681  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.748217  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.748238  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.748245  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.748253  110500 round_trippers.go:580]     Audit-Id: d1d08fdc-7445-4898-b7eb-6476beda912d
	I0114 10:32:23.748262  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.748274  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.748286  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.748294  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.748395  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6329 chars]
	I0114 10:32:23.748815  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:23.748828  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.748835  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.748845  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.750211  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.750225  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.750232  110500 round_trippers.go:580]     Audit-Id: a95fc07c-b593-46bf-8f30-63ff02257647
	I0114 10:32:23.750240  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.750248  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.750257  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.750272  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.750283  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.750383  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:23.750658  110500 pod_ready.go:92] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:23.750671  110500 pod_ready.go:81] duration metric: took 4.054192ms waiting for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
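
Each per-pod wait above, and the etcd wait that follows, keys off the pod's Ready condition (the "has status Ready:True" lines). A sketch of that predicate using the k8s.io/api types:

// Sketch: the readiness predicate behind the pod_ready wait loop.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}
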
	I0114 10:32:23.750678  110500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.750715  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:23.750722  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.750729  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.750734  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.752197  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.752212  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.752220  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.752226  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.752231  110500 round_trippers.go:580]     Audit-Id: e9cb9640-9e23-42f8-94a1-c70e896a63a2
	I0114 10:32:23.752237  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.752246  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.752262  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.752376  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:23.752697  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:23.752709  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.752716  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.752722  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.754027  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.754047  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.754056  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.754064  110500 round_trippers.go:580]     Audit-Id: 83d20878-ef82-4af2-a8ed-28c1b8299d89
	I0114 10:32:23.754073  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.754085  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.754093  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.754103  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.754191  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:24.255300  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:24.255338  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.255348  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.255355  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.257483  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:24.257502  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.257509  110500 round_trippers.go:580]     Audit-Id: d39e3692-47b7-4e86-ada1-da6bc3a167a8
	I0114 10:32:24.257517  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.257526  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.257536  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.257548  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.257562  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.257672  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:24.258113  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:24.258127  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.258138  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.258147  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.259968  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.259991  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.260002  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.260012  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.260020  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.260026  110500 round_trippers.go:580]     Audit-Id: a0b0bcf0-c6b9-4f99-aedf-f72364dcfbaf
	I0114 10:32:24.260033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.260047  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.260220  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:24.754715  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:24.754736  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.754744  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.754750  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.756730  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.756773  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.756783  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.756791  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.756800  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.756811  110500 round_trippers.go:580]     Audit-Id: df0a6fa2-a310-40fd-976c-91df137ef1ec
	I0114 10:32:24.756823  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.756832  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.756952  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:24.757338  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:24.757350  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.757357  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.757363  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.758957  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.758976  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.758984  110500 round_trippers.go:580]     Audit-Id: d4a78893-a75c-47a5-9145-000537d9e421
	I0114 10:32:24.758993  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.759002  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.759013  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.759023  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.759039  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.759147  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.254722  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:25.254743  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.254751  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.254758  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.256711  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.256735  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.256747  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.256756  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.256765  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.256773  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.256785  110500 round_trippers.go:580]     Audit-Id: b0b46cf3-6548-4625-889c-7ab1f6b91f5f
	I0114 10:32:25.256797  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.256916  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:25.257361  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:25.257375  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.257382  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.257392  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.258984  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.259007  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.259013  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.259019  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.259027  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.259036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.259052  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.259060  110500 round_trippers.go:580]     Audit-Id: 055c2428-6dda-4feb-871a-f78137c59674
	I0114 10:32:25.259182  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.754723  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:25.754750  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.754758  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.754764  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.756888  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:25.756905  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.756912  110500 round_trippers.go:580]     Audit-Id: 5ead797e-e2a7-46ae-8b8e-71c88b5db5b4
	I0114 10:32:25.756917  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.756923  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.756932  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.756941  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.756975  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.757091  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:25.757536  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:25.757549  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.757558  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.757564  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.759116  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.759139  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.759149  110500 round_trippers.go:580]     Audit-Id: c137e88b-c781-4c64-bb12-5a1558b3c42d
	I0114 10:32:25.759158  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.759166  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.759173  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.759181  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.759186  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.759337  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.759698  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
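
The GET pairs above are the shape of minikube's readiness wait: roughly every 500ms it re-fetches the etcd-multinode-102822 pod and its node, and pod_ready.go keeps reporting "Ready":"False" until the pod's PodReady condition turns True. A minimal client-go sketch of the same polling pattern; the kubeconfig path, pod name, and 4-minute timeout here are illustrative assumptions, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Re-fetch the pod on a fixed interval until it is Ready or we time out,
	// mirroring the ~500ms GET cadence visible in the log above.
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-multinode-102822", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		return podReady(pod), nil
	})
	fmt.Println("etcd pod ready:", err == nil)
}
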
	I0114 10:32:26.255190  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:26.255211  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.255222  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.255230  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.257295  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:26.257321  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.257331  110500 round_trippers.go:580]     Audit-Id: 4118a105-2c79-4fa0-a2d9-f41e62a1936d
	I0114 10:32:26.257341  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.257349  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.257356  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.257361  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.257366  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.257473  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:26.257881  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:26.257893  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.257900  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.257906  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.259401  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:26.259416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.259422  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.259427  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.259434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.259443  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.259452  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.259463  110500 round_trippers.go:580]     Audit-Id: e165b0e4-85ed-42f7-8b3b-16d681c452ff
	I0114 10:32:26.259579  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:26.755171  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:26.755193  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.755204  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.755212  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.757390  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:26.757416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.757426  110500 round_trippers.go:580]     Audit-Id: 20047ef4-60e6-42c1-ac1d-2be32965f108
	I0114 10:32:26.757437  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.757445  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.757457  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.757470  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.757479  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.757601  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:26.757998  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:26.758011  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.758019  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.758025  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.759714  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:26.759735  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.759745  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.759750  110500 round_trippers.go:580]     Audit-Id: 4dee5e22-75a1-4049-a04f-14302d303af1
	I0114 10:32:26.759756  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.759764  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.759770  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.759779  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.759917  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.255456  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:27.255476  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.255485  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.255491  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.257395  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.257416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.257422  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.257428  110500 round_trippers.go:580]     Audit-Id: 8b584991-7a59-4031-a6ee-ed36b8d982da
	I0114 10:32:27.257433  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.257438  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.257444  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.257450  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.257554  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:27.257971  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:27.257985  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.257995  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.258004  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.259540  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.259560  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.259570  110500 round_trippers.go:580]     Audit-Id: c61cc11c-4f94-4185-85ee-04dcc2eaf2c6
	I0114 10:32:27.259579  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.259588  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.259599  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.259608  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.259619  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.259770  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.755355  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:27.755375  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.755388  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.755394  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.757497  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:27.757520  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.757531  110500 round_trippers.go:580]     Audit-Id: 3ae75ec2-a8ef-4917-afc8-ef9aa3d382cd
	I0114 10:32:27.757540  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.757549  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.757558  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.757578  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.757589  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.757717  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:27.758225  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:27.758239  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.758250  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.758260  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.759856  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.759873  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.759881  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.759886  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.759891  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.759897  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.759904  110500 round_trippers.go:580]     Audit-Id: 2167b85d-7304-4b18-982d-1ea14fbc5a03
	I0114 10:32:27.759909  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.760031  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.760332  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
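
Every response above also carries X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers: the API server's Priority and Fairness layer stamps each reply with the UIDs of the FlowSchema and PriorityLevelConfiguration that classified the request. A sketch of observing them from client-go by wrapping the transport; headerEcho is a hypothetical helper written for this illustration, not part of any library:

package main

import (
	"fmt"
	"net/http"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// headerEcho is a hypothetical RoundTripper that prints the API Priority
// and Fairness headers on every response before handing it back.
type headerEcho struct{ next http.RoundTripper }

func (h headerEcho) RoundTrip(req *http.Request) (*http.Response, error) {
	resp, err := h.next.RoundTrip(req)
	if err == nil {
		fmt.Println("flowschema:", resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"),
			"prioritylevel:", resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid"))
	}
	return resp, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// WrapTransport splices our RoundTripper into every request the client
	// makes; client-go's own debug logging uses the same round-tripper hook.
	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		return headerEcho{next: rt}
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // use the client as usual; every call now echoes the APF headers
}
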
	I0114 10:32:28.255596  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:28.255615  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.255623  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.255629  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.257537  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.257560  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.257570  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.257578  110500 round_trippers.go:580]     Audit-Id: 03d4795a-bb14-49fe-8c24-291b677b4317
	I0114 10:32:28.257585  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.257593  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.257602  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.257611  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.257746  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:28.258130  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:28.258143  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.258153  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.258163  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.259792  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.259811  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.259821  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.259828  110500 round_trippers.go:580]     Audit-Id: 6e41dfe8-826e-4c61-9d51-16e6b88d0c61
	I0114 10:32:28.259836  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.259845  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.259854  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.259864  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.260018  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:28.755686  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:28.755715  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.755727  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.755738  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.757862  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:28.757882  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.757892  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.757899  110500 round_trippers.go:580]     Audit-Id: 09283aa4-913b-44ca-ac73-1a2c219fa6d2
	I0114 10:32:28.757907  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.757916  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.757924  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.757940  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.758111  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:28.758529  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:28.758541  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.758552  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.758561  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.760208  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.760232  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.760243  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.760252  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.760262  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.760275  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.760280  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.760286  110500 round_trippers.go:580]     Audit-Id: 79e70e8c-2b2b-460d-8f0d-0a3d61924cb6
	I0114 10:32:28.760370  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:29.254944  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:29.254966  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.254974  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.254980  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.256978  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.256998  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.257007  110500 round_trippers.go:580]     Audit-Id: 6e8024a5-8df6-485d-a1cc-c9a6aaec52b9
	I0114 10:32:29.257018  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.257029  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.257036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.257045  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.257060  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.257165  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:29.257549  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:29.257561  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.257569  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.257575  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.259222  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.259239  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.259248  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.259256  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.259264  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.259273  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.259286  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.259299  110500 round_trippers.go:580]     Audit-Id: e5165d7a-cea8-4461-96bf-7372805a0bad
	I0114 10:32:29.259430  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:29.755550  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:29.755572  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.755582  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.755591  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.757508  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.757531  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.757541  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.757549  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.757556  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.757565  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.757577  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.757590  110500 round_trippers.go:580]     Audit-Id: 165f76d6-f2b4-485c-a16c-536de0a4d900
	I0114 10:32:29.757704  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:29.758097  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:29.758111  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.758121  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.758130  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.759815  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.759840  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.759851  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.759860  110500 round_trippers.go:580]     Audit-Id: 2334a4fc-ab40-46b8-9a67-f8c6ee0a221f
	I0114 10:32:29.759868  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.759881  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.759890  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.759901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.760019  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:30.255612  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:30.255638  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.255649  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.255657  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.257668  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.257686  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.257694  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.257700  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.257705  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.257713  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.257721  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.257739  110500 round_trippers.go:580]     Audit-Id: 8b29845a-1bac-4410-8ec6-f6d50426573e
	I0114 10:32:30.257848  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:30.258249  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:30.258262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.258269  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.258277  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.260013  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.260035  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.260046  110500 round_trippers.go:580]     Audit-Id: da3743d4-cd4c-437a-b1cc-e53d3ce1d217
	I0114 10:32:30.260055  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.260069  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.260078  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.260090  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.260101  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.260216  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:30.260612  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
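
The round_trippers.go and request.go lines themselves are client-go's debug output, switched on by the -v=8 passed to the start command: at that klog verbosity the client logs each request's URL and headers plus a truncated response body, which is why every body above ends in "[truncated N chars]". A minimal sketch of enabling the same output in a standalone program, assuming the standard klog flag wiring:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, -alsologtostderr, ...) and raise verbosity;
	// client-go consults the klog level when deciding how much of each HTTP
	// round trip to log.
	klog.InitFlags(nil)
	flag.Set("v", "8")
	flag.Set("alsologtostderr", "true")
	flag.Parse()

	// ... build a client-go client as usual; its requests now produce the
	// round_trippers.go / request.go lines seen in this log.
}
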
	I0114 10:32:30.754777  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:30.754797  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.754807  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.754814  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.756907  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:30.756932  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.756943  110500 round_trippers.go:580]     Audit-Id: 0e68a683-03ba-4f20-9b66-357e1ebd6f7a
	I0114 10:32:30.756952  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.756964  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.756975  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.756985  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.756997  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.757115  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:30.757633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:30.757652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.757663  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.757674  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.759328  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.759350  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.759360  110500 round_trippers.go:580]     Audit-Id: 79b5c42f-d5ee-473f-9ef2-7ddbd23be82b
	I0114 10:32:30.759369  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.759380  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.759393  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.759402  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.759415  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.759552  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:31.255085  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:31.255105  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.255125  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.255133  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.257142  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.257166  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.257173  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.257179  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.257188  110500 round_trippers.go:580]     Audit-Id: 5b02cf04-91ee-4c3f-b2b9-a589aad94bae
	I0114 10:32:31.257196  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.257207  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.257224  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.257348  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:31.257765  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:31.257780  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.257791  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.257808  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.259361  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.259385  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.259394  110500 round_trippers.go:580]     Audit-Id: 7c3adabb-9744-4a02-b2bc-e2aec5b89a83
	I0114 10:32:31.259403  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.259412  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.259424  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.259435  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.259447  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.259532  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:31.755109  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:31.755130  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.755138  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.755145  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.757374  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:31.757401  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.757411  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.757418  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.757425  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.757432  110500 round_trippers.go:580]     Audit-Id: c0266278-a5dd-4e1b-af76-c45306fd69fe
	I0114 10:32:31.757440  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.757450  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.757613  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:31.757997  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:31.758009  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.758016  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.758022  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.759699  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.759725  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.759735  110500 round_trippers.go:580]     Audit-Id: 65120ef6-edb7-4836-9391-4ac8e7c2ed70
	I0114 10:32:31.759745  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.759758  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.759770  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.759783  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.759792  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.759887  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.255485  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:32.255506  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.255517  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.255525  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.257491  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.257519  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.257530  110500 round_trippers.go:580]     Audit-Id: f09b3877-d0df-4034-8fb4-90ce1a1bd2de
	I0114 10:32:32.257540  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.257552  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.257561  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.257573  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.257579  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.257716  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:32.258163  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:32.258178  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.258185  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.258192  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.259808  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.259830  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.259840  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.259849  110500 round_trippers.go:580]     Audit-Id: 6127838b-d1d4-40db-901d-1116a8eeaaae
	I0114 10:32:32.259862  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.259871  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.259884  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.259893  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.260004  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.755587  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:32.755609  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.755618  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.755625  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.757750  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:32.757772  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.757782  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.757791  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.757801  110500 round_trippers.go:580]     Audit-Id: fd727de4-8b43-4307-abce-74fef66a240a
	I0114 10:32:32.757812  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.757824  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.757833  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.757927  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:32.758367  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:32.758380  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.758387  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.758394  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.760252  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.760275  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.760285  110500 round_trippers.go:580]     Audit-Id: 1ba406ab-1408-47f3-9f7b-d83a23d1d995
	I0114 10:32:32.760294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.760303  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.760311  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.760320  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.760333  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.760453  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.760759  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
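	[Editor's note: the pod_ready lines above show the test driver re-fetching the pod and its node roughly every 500ms and inspecting whether the pod's Ready condition has turned True. The following is a minimal client-go sketch of that kind of check; it illustrates the pattern visible in the log and is not minikube's actual pod_ready.go. The package and function names (readiness, isPodReady) are assumptions.]

    // Sketch of a pod-readiness probe with client-go (illustrative, not minikube source).
    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady fetches the pod and reports whether its PodReady condition is True,
    // mirroring the GET-and-inspect cycle recorded in the log above.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }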
	I0114 10:32:33.255183  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:33.255203  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.255216  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.255227  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.256935  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:33.256955  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.256965  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.256974  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.256983  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.256997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.257006  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.257015  110500 round_trippers.go:580]     Audit-Id: f707a20e-84c0-4220-8de1-a410d53bbbd2
	I0114 10:32:33.257114  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:33.257633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:33.257652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.257664  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.257673  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.261003  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:33.261025  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.261038  110500 round_trippers.go:580]     Audit-Id: 55c5ec23-8f33-46de-a5f1-ca14186a4547
	I0114 10:32:33.261047  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.261055  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.261067  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.261079  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.261089  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.261197  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:33.754764  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:33.754788  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.754796  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.754802  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.756951  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:33.756974  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.756990  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.756999  110500 round_trippers.go:580]     Audit-Id: 3b0d4e77-45b8-44ee-b4f3-262610cdf21f
	I0114 10:32:33.757012  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.757024  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.757036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.757049  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.757166  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:33.757602  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:33.757615  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.757622  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.757628  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.759206  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:33.759227  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.759236  110500 round_trippers.go:580]     Audit-Id: bb4508cd-9c1a-4d46-a92a-ce006889479a
	I0114 10:32:33.759246  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.759255  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.759267  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.759279  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.759292  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.759396  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:34.254745  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:34.254766  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.254774  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.254780  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.256865  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:34.256890  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.256901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.256910  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.256918  110500 round_trippers.go:580]     Audit-Id: 19099497-60b0-4de7-a3a8-250a4c3230ae
	I0114 10:32:34.256927  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.256937  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.256949  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.257056  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:34.257478  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:34.257492  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.257500  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.257507  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.259093  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:34.259116  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.259125  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.259135  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.259147  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.259156  110500 round_trippers.go:580]     Audit-Id: 652cb597-28f1-4a27-a56c-a6bd5d19765f
	I0114 10:32:34.259168  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.259179  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.259278  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:34.754848  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:34.754881  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.754890  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.754897  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.757014  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:34.757041  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.757052  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.757061  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.757067  110500 round_trippers.go:580]     Audit-Id: c80617eb-7d4c-4c6e-9079-9c6bcb1f5c04
	I0114 10:32:34.757074  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.757083  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.757093  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.757298  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:34.757687  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:34.757699  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.757707  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.757713  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.759337  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:34.759354  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.759360  110500 round_trippers.go:580]     Audit-Id: 7b4eee6e-6983-4da2-a62a-b725116b5647
	I0114 10:32:34.759366  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.759371  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.759379  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.759388  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.759399  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.759513  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:35.254779  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:35.254804  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.254816  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.254825  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.256971  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:35.256996  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.257004  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.257010  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.257016  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.257025  110500 round_trippers.go:580]     Audit-Id: 0d3539bd-6778-49cc-96e9-9bad1309f553
	I0114 10:32:35.257033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.257045  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.257162  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:35.257610  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:35.257623  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.257631  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.257637  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.259224  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:35.259240  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.259247  110500 round_trippers.go:580]     Audit-Id: 0ae81ca5-2444-4e02-9a52-f90078681427
	I0114 10:32:35.259255  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.259263  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.259275  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.259288  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.259299  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.259411  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:35.259731  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:35.754968  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:35.754991  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.754999  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.755005  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.757108  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:35.757134  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.757141  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.757147  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.757153  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.757158  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.757164  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.757169  110500 round_trippers.go:580]     Audit-Id: 346069ca-82c5-48d2-9563-9b2ddbf48dc1
	I0114 10:32:35.757280  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:35.757685  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:35.757698  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.757706  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.757712  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.759339  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:35.759361  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.759371  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.759380  110500 round_trippers.go:580]     Audit-Id: 4f47e4ad-f84c-4e00-b267-67aaa09518dc
	I0114 10:32:35.759390  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.759403  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.759408  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.759420  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.759543  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.255483  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:36.255502  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.255510  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.255516  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.257488  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.257509  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.257516  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.257521  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.257527  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.257532  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.257537  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.257542  110500 round_trippers.go:580]     Audit-Id: 686cc3bb-197f-4b71-8669-c9c811866ccb
	I0114 10:32:36.257662  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"777","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6035 chars]
	I0114 10:32:36.258125  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.258140  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.258151  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.258161  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.259632  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.259648  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.259654  110500 round_trippers.go:580]     Audit-Id: 88ae3e7f-6045-49d8-bb2d-357c67401973
	I0114 10:32:36.259660  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.259667  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.259691  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.259703  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.259722  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.259868  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.260159  110500 pod_ready.go:92] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.260181  110500 pod_ready.go:81] duration metric: took 12.509495988s waiting for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
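	[Editor's note: the 12.5s duration metric above is the ~500ms poll repeated until etcd's Ready condition flipped to True, within the 4m0s budget the log grants each control-plane pod. Below is a hedged sketch of such a bounded wait loop, assuming the apimachinery wait helper and the isPodReady check from the earlier sketch; it is an illustration, not the minikube implementation.]

    // Sketch of the bounded readiness wait visible in the log (assumed names).
    package readiness

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms until the pod reports Ready or the
    // 4-minute budget is exhausted, matching the cadence and timeout logged above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
            return isPodReady(ctx, cs, ns, name)
        })
    }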
	I0114 10:32:36.260205  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.260261  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102822
	I0114 10:32:36.260270  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.260282  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.260297  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.261791  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.261809  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.261818  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.261826  110500 round_trippers.go:580]     Audit-Id: 5634a153-6fb5-404f-a137-f53afacc1245
	I0114 10:32:36.261834  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.261846  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.261855  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.261869  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.262026  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102822","namespace":"kube-system","uid":"c74a88c9-d603-4a80-a194-de75c8d0a3a5","resourceVersion":"770","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.mirror":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.seen":"2023-01-14T10:28:42.123458577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8421 chars]
	I0114 10:32:36.262461  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.262472  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.262479  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.262487  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.263929  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.263945  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.263954  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.263962  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.263970  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.263982  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.263995  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.264005  110500 round_trippers.go:580]     Audit-Id: e55d109e-705e-470a-9744-b5583c449686
	I0114 10:32:36.264139  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.264398  110500 pod_ready.go:92] pod "kube-apiserver-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.264408  110500 pod_ready.go:81] duration metric: took 4.192145ms waiting for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.264418  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.264453  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102822
	I0114 10:32:36.264460  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.264467  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.264474  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.266038  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.266055  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.266064  110500 round_trippers.go:580]     Audit-Id: 47324d68-80de-4990-a022-d7d52a3fcbf0
	I0114 10:32:36.266071  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.266079  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.266088  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.266097  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.266107  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.266215  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102822","namespace":"kube-system","uid":"85c8a264-96f3-4fcf-affd-917b94bdd177","resourceVersion":"774","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.mirror":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.seen":"2023-01-14T10:28:42.123460297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7996 chars]
	I0114 10:32:36.266578  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.266591  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.266601  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.266611  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.267963  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.267978  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.267984  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.267990  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.267996  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.268005  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.268017  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.268028  110500 round_trippers.go:580]     Audit-Id: 93002af6-97df-471f-95fa-3d5e668e2fca
	I0114 10:32:36.268120  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.268373  110500 pod_ready.go:92] pod "kube-controller-manager-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.268384  110500 pod_ready.go:81] duration metric: took 3.959272ms waiting for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.268394  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.268427  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d5n6
	I0114 10:32:36.268434  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.268441  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.268447  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.269931  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.269946  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.269953  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.269959  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.269964  110500 round_trippers.go:580]     Audit-Id: 51ab0589-c66b-4eab-b16d-8834f2151d9a
	I0114 10:32:36.269972  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.269981  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.269997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.270077  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4d5n6","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dba561b-e827-4a6e-afd9-11c68b7e4447","resourceVersion":"471","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5522 chars]
	I0114 10:32:36.270392  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:32:36.270402  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.270408  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.270414  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.271787  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.271805  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.271811  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.271817  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.271823  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.271828  110500 round_trippers.go:580]     Audit-Id: 126c3c4b-b18f-441c-b869-90363ea3dee2
	I0114 10:32:36.271833  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.271838  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.271935  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4","resourceVersion":"549","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io
/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1" [truncated 4430 chars]
	I0114 10:32:36.272154  110500 pod_ready.go:92] pod "kube-proxy-4d5n6" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.272165  110500 pod_ready.go:81] duration metric: took 3.765849ms waiting for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.272172  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.272210  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:36.272218  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.272224  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.272230  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.273735  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.273750  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.273760  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.273769  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.273784  110500 round_trippers.go:580]     Audit-Id: e57ac6dc-00b8-4e56-8601-76f0d7bbb22c
	I0114 10:32:36.273797  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.273809  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.273819  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.273928  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bzd24","generateName":"kube-proxy-","namespace":"kube-system","uid":"3191f786-4823-486a-90e6-be1b1180c23a","resourceVersion":"660","creationTimestamp":"2023-01-14T10:30:20Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 10:32:36.274311  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:36.274329  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.274336  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.274342  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.275632  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.275645  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.275652  110500 round_trippers.go:580]     Audit-Id: 8bcc2f0b-ac50-4161-9c26-9bb0097ebfb8
	I0114 10:32:36.275657  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.275663  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.275668  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.275697  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.275708  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.275877  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m03","uid":"7fb9d125-0cce-4853-b9a9-9348e20e7ae7","resourceVersion":"674","creationTimestamp":"2023-01-14T10:31:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu
mes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{"." [truncated 4248 chars]
	I0114 10:32:36.276181  110500 pod_ready.go:92] pod "kube-proxy-bzd24" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.276195  110500 pod_ready.go:81] duration metric: took 4.017618ms waiting for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.276205  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.455534  110500 request.go:614] Waited for 179.275116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:36.455598  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:36.455602  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.455629  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.455639  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.457708  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.457736  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.457747  110500 round_trippers.go:580]     Audit-Id: 54c225ad-ae45-474b-b73c-0a4296e75b17
	I0114 10:32:36.457756  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.457763  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.457775  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.457794  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.457803  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.457939  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlcll","generateName":"kube-proxy-","namespace":"kube-system","uid":"91e05737-5cbf-404c-8b7c-75045f584885","resourceVersion":"718","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5727 chars]
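
The "Waited ... due to client-side throttling, not priority and fairness" messages here are produced by client-go's own rate limiter, not by the API server: the rest.Config dumped further down in this log shows QPS:0 and Burst:0, which makes client-go fall back to its defaults (5 requests/s with a burst of 10), and the readiness poll briefly exceeds that. A minimal sketch of raising the limit in a client-go program (the kubeconfig path is a placeholder, and this is an illustration rather than minikube's code):

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a kubeconfig the same way kubectl would.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		// QPS/Burst of 0 (as in the config logged below) mean "use
		// client-go defaults"; raising them avoids the client-side
		// throttling waits seen in this log.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
	}
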
	I0114 10:32:36.655705  110500 request.go:614] Waited for 197.282889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.655766  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.655771  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.655779  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.655786  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.658009  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.658030  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.658044  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.658052  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.658061  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.658068  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.658077  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.658088  110500 round_trippers.go:580]     Audit-Id: 9962cd09-553e-4e94-9f81-8a21b65473fa
	I0114 10:32:36.658176  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.658482  110500 pod_ready.go:92] pod "kube-proxy-qlcll" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.658493  110500 pod_ready.go:81] duration metric: took 382.279914ms waiting for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.658501  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.855574  110500 request.go:614] Waited for 196.992212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:36.855633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:36.855641  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.855652  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.855660  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.857870  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.857889  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.857896  110500 round_trippers.go:580]     Audit-Id: 9d2c7405-2f70-45e4-b3b5-c264d1b3fc4f
	I0114 10:32:36.857902  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.857907  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.857913  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.857918  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.857924  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.858061  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102822","namespace":"kube-system","uid":"63ee442e-88de-44d9-8512-98c56f1b4942","resourceVersion":"725","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.mirror":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.seen":"2023-01-14T10:28:42.123461701Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4878 chars]
	I0114 10:32:37.055720  110500 request.go:614] Waited for 197.266354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.055769  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.055774  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.055787  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.055797  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.057958  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.057979  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.057988  110500 round_trippers.go:580]     Audit-Id: 1580fd75-e4e0-4a4a-9791-aff04e65f15c
	I0114 10:32:37.057993  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.058002  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.058008  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.058015  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.058021  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.058137  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:37.058454  110500 pod_ready.go:92] pod "kube-scheduler-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:37.058468  110500 pod_ready.go:81] duration metric: took 399.960714ms waiting for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:37.058477  110500 pod_ready.go:38] duration metric: took 13.317646399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
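
Each pod_ready.go step above polls one control-plane pod until its PodReady condition reports True, with a per-pod budget of 4m0s. A minimal client-go sketch of the same kind of check (pod name taken from the log; the kubeconfig path is a placeholder, and this is an illustration rather than minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget above
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-multinode-102822", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// The "Ready":"True" checks in the log correspond to
					// this condition on the pod status.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for pod to be Ready")
	}
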
	I0114 10:32:37.058494  110500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:32:37.065465  110500 command_runner.go:130] > -16
	I0114 10:32:37.065504  110500 ops.go:34] apiserver oom_adj: -16
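
The ops.go check above confirms the API server is shielded from the OOM killer (oom_adj of -16) by running `cat /proc/$(pgrep kube-apiserver)/oom_adj` over SSH. A rough standalone Go equivalent of that one-liner, for illustration only (assumes a single kube-apiserver process on the host):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			log.Fatal(err) // pgrep exits non-zero if no process matches
		}
		pid := strings.Fields(string(out))[0] // take the first matching PID
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s", data)
	}
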
	I0114 10:32:37.065514  110500 kubeadm.go:631] restartCluster took 23.987780678s
	I0114 10:32:37.065526  110500 kubeadm.go:398] StartCluster complete in 24.028830611s
	I0114 10:32:37.065550  110500 settings.go:142] acquiring lock: {Name:mk1c1a895c03873155a8c7da5f3762b351f9952d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:37.065670  110500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.066259  110500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:37.066720  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.066964  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
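
The rest.Config dump above shows how the client authenticates: mutual TLS with a per-profile client certificate and key, verified against the cluster CA (sanitizedTLSClientConfig appears to be client-go's redacted form of the TLS settings for printing). Building an equivalent config by hand looks roughly like this, with the endpoint and paths copied from the log:

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.58.2:8443",
			// Client cert/key pair for the profile, plus the cluster CA,
			// matching the CertFile/KeyFile/CAFile fields in the dump.
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key",
				CAFile:   "/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt",
			},
		}
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
	}
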
	I0114 10:32:37.067294  110500 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0114 10:32:37.067309  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.067324  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.067333  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.069540  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.069555  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.069562  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.069567  110500 round_trippers.go:580]     Content-Length: 291
	I0114 10:32:37.069573  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.069578  110500 round_trippers.go:580]     Audit-Id: 72d12c09-7a9f-482a-ba0b-2b59f789418c
	I0114 10:32:37.069583  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.069588  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.069594  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.069612  110500 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a0ae11c7-3256-4ef8-a0cd-ff11f2de358a","resourceVersion":"753","creationTimestamp":"2023-01-14T10:28:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0114 10:32:37.069762  110500 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-102822" rescaled to 1
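
kapi.go reads the coredns Deployment through its scale subresource (the GET .../deployments/coredns/scale above) and pins it to one replica; here the deployment already reports replicas:1, so no write follows. The same read-then-update pattern via client-go, sketched (kubeconfig path is a placeholder; not minikube's actual code):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.TODO()
		// Read the current scale of the coredns Deployment.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			// Write only the scale subresource, leaving the rest of the
			// Deployment spec untouched.
			if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				log.Fatal(err)
			}
		}
	}
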
	I0114 10:32:37.069822  110500 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:32:37.072149  110500 out.go:177] * Verifying Kubernetes components...
	I0114 10:32:37.069850  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 10:32:37.069869  110500 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0114 10:32:37.070096  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:37.073566  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:32:37.073588  110500 addons.go:65] Setting storage-provisioner=true in profile "multinode-102822"
	I0114 10:32:37.073605  110500 addons.go:65] Setting default-storageclass=true in profile "multinode-102822"
	I0114 10:32:37.073612  110500 addons.go:227] Setting addon storage-provisioner=true in "multinode-102822"
	W0114 10:32:37.073620  110500 addons.go:236] addon storage-provisioner should already be in state true
	I0114 10:32:37.073683  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:32:37.073623  110500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-102822"
	I0114 10:32:37.073995  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.074114  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.083738  110500 node_ready.go:35] waiting up to 6m0s for node "multinode-102822" to be "Ready" ...
	I0114 10:32:37.101957  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.102248  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:37.104720  110500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:32:37.102705  110500 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0114 10:32:37.106493  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.106510  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.106521  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.106628  110500 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:32:37.106646  110500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 10:32:37.106697  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:37.108695  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.108732  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.108744  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.108754  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.108763  110500 round_trippers.go:580]     Content-Length: 1273
	I0114 10:32:37.108775  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.108784  110500 round_trippers.go:580]     Audit-Id: 0ea79256-b1ab-4ac5-8466-82be87c881b8
	I0114 10:32:37.108794  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.108800  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.108831  110500 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0114 10:32:37.109300  110500 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 10:32:37.109347  110500 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0114 10:32:37.109351  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.109359  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.109368  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.109374  110500 round_trippers.go:473]     Content-Type: application/json
	I0114 10:32:37.112948  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:37.112965  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.112974  110500 round_trippers.go:580]     Audit-Id: bc58afbd-603f-4591-8a19-d9db28fda25c
	I0114 10:32:37.112983  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.112992  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.113004  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.113014  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.113024  110500 round_trippers.go:580]     Content-Length: 1220
	I0114 10:32:37.113030  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.113076  110500 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 10:32:37.113223  110500 addons.go:227] Setting addon default-storageclass=true in "multinode-102822"
	W0114 10:32:37.113242  110500 addons.go:236] addon default-storageclass should already be in state true
	I0114 10:32:37.113268  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:32:37.113633  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.134182  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:37.140584  110500 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 10:32:37.140618  110500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 10:32:37.140684  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:37.142257  110500 command_runner.go:130] > apiVersion: v1
	I0114 10:32:37.142279  110500 command_runner.go:130] > data:
	I0114 10:32:37.142286  110500 command_runner.go:130] >   Corefile: |
	I0114 10:32:37.142291  110500 command_runner.go:130] >     .:53 {
	I0114 10:32:37.142298  110500 command_runner.go:130] >         errors
	I0114 10:32:37.142306  110500 command_runner.go:130] >         health {
	I0114 10:32:37.142312  110500 command_runner.go:130] >            lameduck 5s
	I0114 10:32:37.142318  110500 command_runner.go:130] >         }
	I0114 10:32:37.142329  110500 command_runner.go:130] >         ready
	I0114 10:32:37.142340  110500 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0114 10:32:37.142350  110500 command_runner.go:130] >            pods insecure
	I0114 10:32:37.142361  110500 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0114 10:32:37.142372  110500 command_runner.go:130] >            ttl 30
	I0114 10:32:37.142378  110500 command_runner.go:130] >         }
	I0114 10:32:37.142383  110500 command_runner.go:130] >         prometheus :9153
	I0114 10:32:37.142396  110500 command_runner.go:130] >         hosts {
	I0114 10:32:37.142408  110500 command_runner.go:130] >            192.168.58.1 host.minikube.internal
	I0114 10:32:37.142415  110500 command_runner.go:130] >            fallthrough
	I0114 10:32:37.142424  110500 command_runner.go:130] >         }
	I0114 10:32:37.142432  110500 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0114 10:32:37.142443  110500 command_runner.go:130] >            max_concurrent 1000
	I0114 10:32:37.142452  110500 command_runner.go:130] >         }
	I0114 10:32:37.142458  110500 command_runner.go:130] >         cache 30
	I0114 10:32:37.142467  110500 command_runner.go:130] >         loop
	I0114 10:32:37.142476  110500 command_runner.go:130] >         reload
	I0114 10:32:37.142486  110500 command_runner.go:130] >         loadbalance
	I0114 10:32:37.142492  110500 command_runner.go:130] >     }
	I0114 10:32:37.142501  110500 command_runner.go:130] > kind: ConfigMap
	I0114 10:32:37.142507  110500 command_runner.go:130] > metadata:
	I0114 10:32:37.142520  110500 command_runner.go:130] >   creationTimestamp: "2023-01-14T10:28:42Z"
	I0114 10:32:37.142530  110500 command_runner.go:130] >   name: coredns
	I0114 10:32:37.142540  110500 command_runner.go:130] >   namespace: kube-system
	I0114 10:32:37.142549  110500 command_runner.go:130] >   resourceVersion: "369"
	I0114 10:32:37.142554  110500 command_runner.go:130] >   uid: 348659ae-af6c-4ae1-ba1c-2468636d5cd9
	I0114 10:32:37.142667  110500 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
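
The ConfigMap dump above is fetched so start.go can decide whether the Corefile's hosts block already maps host.minikube.internal to the gateway address (192.168.58.1 here); since it does, the rewrite is skipped. The decision amounts to a substring test over the ConfigMap data, roughly (illustrative; placeholder kubeconfig path):

	package main

	import (
		"context"
		"fmt"
		"log"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// The hosts plugin entry lets in-cluster pods resolve the host
		// machine as host.minikube.internal.
		if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
			fmt.Println("host record already present, skipping")
		}
	}
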
	I0114 10:32:37.166923  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:37.232895  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:32:37.255727  110500 request.go:614] Waited for 171.896659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.255799  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.255808  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.255816  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.255826  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.257966  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.257995  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.258005  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.258014  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.258024  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.258037  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.258048  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.258056  110500 round_trippers.go:580]     Audit-Id: 26a9ffda-d3c6-41ff-9b64-02b9f68339e0
	I0114 10:32:37.258212  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:37.258658  110500 node_ready.go:49] node "multinode-102822" has status "Ready":"True"
	I0114 10:32:37.258684  110500 node_ready.go:38] duration metric: took 174.906774ms waiting for node "multinode-102822" to be "Ready" ...
	I0114 10:32:37.258695  110500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
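
pod_ready.go:35 starts a second readiness pass over all system-critical pods carrying the labels in that list. The very next request fetches the whole kube-system pod list and the filtering happens client-side; a server-side variant of the same selection would use a label selector, for example (placeholder kubeconfig path; selectors cannot be OR'd, so each label from the list would need its own query):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// One selector from the logged list; component=etcd,
		// component=kube-apiserver, etc. would each need their own call.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name)
		}
	}
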
	I0114 10:32:37.261672  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 10:32:37.455739  110500 request.go:614] Waited for 196.934939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:37.455812  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:37.455819  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.455845  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.455855  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.459944  110500 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 10:32:37.459977  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.459987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.459996  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.460006  110500 round_trippers.go:580]     Audit-Id: 9ef79377-2b98-4dcf-b71e-74e70cc74bad
	I0114 10:32:37.460014  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.460024  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.460035  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.461304  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84721 chars]
	I0114 10:32:37.465016  110500 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:37.474891  110500 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0114 10:32:37.476850  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0114 10:32:37.478813  110500 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 10:32:37.480672  110500 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 10:32:37.521681  110500 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0114 10:32:37.531444  110500 command_runner.go:130] > pod/storage-provisioner configured
	I0114 10:32:37.535193  110500 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0114 10:32:37.537991  110500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 10:32:37.539317  110500 addons.go:488] enableAddons completed in 469.451083ms
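
The addon lines above also show why a restart can safely re-enable addons: each manifest is copied to /etc/kubernetes/addons and passed to kubectl apply, which diffs against the live objects, hence the run of "unchanged" results. Reduced to its essentials, the logged invocation is equivalent to this sketch (paths as in the log):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// sudo accepts VAR=value arguments, so KUBECONFIG reaches kubectl
		// exactly as in the logged command; apply is idempotent, which is
		// why re-applying unchanged manifests reports "... unchanged".
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.25.3/kubectl",
			"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
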
	I0114 10:32:37.656457  110500 request.go:614] Waited for 191.361178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:37.656516  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:37.656521  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.656528  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.656535  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.658982  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.659003  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.659010  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.659016  110500 round_trippers.go:580]     Audit-Id: 594f46bb-9c2a-47db-b0bd-2919bd22e370
	I0114 10:32:37.659022  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.659028  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.659035  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.659043  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.659176  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:37.855996  110500 request.go:614] Waited for 196.354517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.856061  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.856072  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.856083  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.856096  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.858291  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.858314  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.858321  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.858327  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.858332  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.858337  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.858343  110500 round_trippers.go:580]     Audit-Id: 5b4e38d1-702e-4a2c-b31b-d2ebda836842
	I0114 10:32:37.858350  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.858523  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:38.359596  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:38.359618  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.359626  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.359633  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.361829  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:38.361851  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.361862  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.361871  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.361880  110500 round_trippers.go:580]     Audit-Id: 4553d3a6-d5cb-414b-8f19-6e8030cb3318
	I0114 10:32:38.361891  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.361901  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.361912  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.362088  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:38.362557  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:38.362569  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.362576  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.362582  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.364247  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:38.364267  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.364277  110500 round_trippers.go:580]     Audit-Id: 8e182689-b157-467d-aa0f-9d9b888d9608
	I0114 10:32:38.364294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.364306  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.364321  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.364331  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.364343  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.364481  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:38.859969  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:38.859993  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.860001  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.860007  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.862084  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:38.862106  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.862116  110500 round_trippers.go:580]     Audit-Id: b5d68d28-294d-4edc-aed1-a7efefc5a6a7
	I0114 10:32:38.862124  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.862131  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.862138  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.862147  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.862156  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.862306  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:38.862890  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:38.862906  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.862915  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.862922  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.864596  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:38.864618  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.864628  110500 round_trippers.go:580]     Audit-Id: 97ff09cc-b23a-4680-b752-7f9598de1f65
	I0114 10:32:38.864635  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.864640  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.864648  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.864654  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.864660  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.864836  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.359064  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:39.359088  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.359097  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.359104  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.361396  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:39.361421  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.361433  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.361442  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.361452  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.361468  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.361477  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.361488  110500 round_trippers.go:580]     Audit-Id: 81698645-fa16-4748-8e8b-746b7500c0b0
	I0114 10:32:39.361597  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:39.362018  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:39.362029  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.362036  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.362042  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.363706  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:39.363724  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.363731  110500 round_trippers.go:580]     Audit-Id: 06179dbb-c491-4fdf-9b46-8b57c30a2a02
	I0114 10:32:39.363736  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.363743  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.363751  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.363762  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.363775  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.363902  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.859598  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:39.859624  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.859633  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.859639  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.861841  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:39.861862  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.861869  110500 round_trippers.go:580]     Audit-Id: 5283f801-29b4-4678-bdb3-c59dd8c322ae
	I0114 10:32:39.861875  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.861891  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.861901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.861915  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.861924  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.862110  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:39.862591  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:39.862647  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.862666  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.862677  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.864481  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:39.864498  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.864508  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.864516  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.864524  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.864533  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.864547  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.864556  110500 round_trippers.go:580]     Audit-Id: 55799b22-9268-4877-b94b-a33177d8cdeb
	I0114 10:32:39.864734  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.865116  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
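	The pod_ready.go:102 verdict above ("Ready":"False") is derived from the PodReady condition in the Pod status returned by the GETs; the response bodies are truncated before the status section, but the unchanged resourceVersion 706 shows the pod never transitions. A minimal sketch of such a readiness check using client-go types follows; it is an illustration, not minikube's exact implementation.

	package ready

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// IsPodReady reports whether the PodReady condition on a fetched Pod is
	// True; in the trace above it stays False for the whole wait.
	func IsPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}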
	I0114 10:32:40.359224  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:40.359246  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.359254  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.359261  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.361570  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:40.361599  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.361606  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.361613  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.361619  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.361625  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.361633  110500 round_trippers.go:580]     Audit-Id: fab59685-da79-42f1-9658-97a80bf226a9
	I0114 10:32:40.361638  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.361768  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:40.362231  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:40.362245  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.362253  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.362259  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.364061  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.364077  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.364084  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.364091  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.364100  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.364111  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.364120  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.364126  110500 round_trippers.go:580]     Audit-Id: 43f7e97f-897f-4c5b-b9e7-c2e06b9b42f4
	I0114 10:32:40.364245  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:40.859884  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:40.859905  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.859913  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.859919  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.861912  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.861938  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.861949  110500 round_trippers.go:580]     Audit-Id: 2c9c969f-0642-4f26-bf0a-d2e8bc6a68ed
	I0114 10:32:40.861959  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.861969  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.861978  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.861990  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.862001  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.862133  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:40.862577  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:40.862590  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.862597  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.862605  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.864355  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.864375  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.864385  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.864393  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.864406  110500 round_trippers.go:580]     Audit-Id: 1820abbb-3ffb-4314-afa4-4789a3f8b5fb
	I0114 10:32:40.864412  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.864419  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.864425  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.864542  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:41.359096  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:41.359121  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.359132  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.359141  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.361299  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.361321  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.361328  110500 round_trippers.go:580]     Audit-Id: 034170e7-b7e3-4243-8ab4-133db6e98d26
	I0114 10:32:41.361334  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.361340  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.361346  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.361367  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.361375  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.361529  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:41.361977  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:41.361989  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.361996  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.362006  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.364080  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.364105  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.364114  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.364123  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.364130  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.364137  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.364145  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.364158  110500 round_trippers.go:580]     Audit-Id: 6c30d9ea-d002-4067-b3f3-a45d3334319b
	I0114 10:32:41.364280  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:41.859939  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:41.859959  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.859967  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.859974  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.862111  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.862128  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.862135  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.862140  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.862145  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.862151  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.862156  110500 round_trippers.go:580]     Audit-Id: 5fab01c3-76e3-4314-902a-ce2b17e158b7
	I0114 10:32:41.862161  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.862272  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:41.862698  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:41.862709  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.862716  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.862722  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.864461  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:41.864481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.864492  110500 round_trippers.go:580]     Audit-Id: 70012846-9522-4e4a-b077-d768ada29a5c
	I0114 10:32:41.864501  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.864509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.864521  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.864529  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.864538  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.864653  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:42.359169  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:42.359192  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.359200  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.359207  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.361441  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:42.361465  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.361475  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.361483  110500 round_trippers.go:580]     Audit-Id: 5c90d803-398e-4d0e-b154-3406a73293ce
	I0114 10:32:42.361492  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.361500  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.361509  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.361521  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.361656  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:42.362144  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:42.362159  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.362166  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.362172  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.363986  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:42.364008  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.364019  110500 round_trippers.go:580]     Audit-Id: 28a92848-4048-4824-9116-41fca3477677
	I0114 10:32:42.364031  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.364042  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.364055  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.364067  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.364078  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.364250  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:42.364584  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
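	The timestamps show a steady cadence: one pod GET plus one node GET roughly every 500ms until the pod becomes Ready or the wait times out. A hedged sketch of that polling pattern using apimachinery's wait helpers is below; the function name WaitForPodReady is illustrative and the loop may differ in detail from minikube's own.

	package poll

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForPodReady polls the API server every 500ms (the cadence visible
	// in the log) until the pod's Ready condition is True or timeout elapses.
	func WaitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}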
	I0114 10:32:42.859846  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:42.859875  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.859883  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.859890  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.862342  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:42.862362  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.862370  110500 round_trippers.go:580]     Audit-Id: 78bdb26b-a666-4f1d-893c-57b64da2bd73
	I0114 10:32:42.862375  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.862381  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.862386  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.862392  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.862397  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.862507  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:42.862938  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:42.862949  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.862956  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.862963  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.864801  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:42.864822  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.864831  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.864840  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.864849  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.864861  110500 round_trippers.go:580]     Audit-Id: 158eefbb-c63f-4fc0-ae90-e23ca6843f48
	I0114 10:32:42.864877  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.864887  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.864999  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:43.359803  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:43.359828  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.359837  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.359843  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.361988  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:43.362007  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.362014  110500 round_trippers.go:580]     Audit-Id: f05e6b2e-be63-42aa-bf95-7b57c23f420d
	I0114 10:32:43.362020  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.362025  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.362034  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.362039  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.362047  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.362144  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:43.362591  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:43.362603  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.362611  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.362618  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.364316  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:43.364337  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.364347  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.364356  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.364369  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.364378  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.364389  110500 round_trippers.go:580]     Audit-Id: 47c3825f-c4f2-4038-b9c9-ee937c9f14c3
	I0114 10:32:43.364401  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.364527  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:43.859053  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:43.859076  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.859084  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.859090  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.861229  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:43.861248  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.861255  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.861261  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.861272  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.861284  110500 round_trippers.go:580]     Audit-Id: fbebfdc6-d1bc-48b9-b0ed-49434b5c9ab0
	I0114 10:32:43.861294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.861301  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.861450  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:43.861923  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:43.861935  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.861943  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.861949  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.863768  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:43.863788  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.863796  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.863802  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.863807  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.863812  110500 round_trippers.go:580]     Audit-Id: 03d3495e-a450-42d8-ab63-885b6e3ff6e9
	I0114 10:32:43.863821  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.863829  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.863951  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:44.359472  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:44.359499  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.359512  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.359518  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.362100  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:44.362134  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.362144  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.362151  110500 round_trippers.go:580]     Audit-Id: 2ecb38ea-7962-4ce4-9634-c8d41bc36023
	I0114 10:32:44.362156  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.362162  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.362171  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.362184  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.362385  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:44.363138  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:44.363160  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.363172  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.363223  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.365040  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:44.365059  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.365066  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.365071  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.365078  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.365090  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.365102  110500 round_trippers.go:580]     Audit-Id: 504ff71e-4c40-4919-915c-04e52d16b2f0
	I0114 10:32:44.365114  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.365237  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:44.365660  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:44.859846  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:44.859869  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.859880  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.859886  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.862148  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:44.862174  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.862184  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.862193  110500 round_trippers.go:580]     Audit-Id: 1aef2bd1-d91e-4c55-b201-3d8fcb31bef9
	I0114 10:32:44.862202  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.862211  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.862218  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.862227  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.862358  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:44.862889  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:44.862901  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.862911  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.862917  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.864803  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:44.864819  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.864826  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.864831  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.864837  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.864844  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.864853  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.864861  110500 round_trippers.go:580]     Audit-Id: 916976a1-8d50-4315-b550-483b9bc9608b
	I0114 10:32:44.865027  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:45.359632  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:45.359652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.359660  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.359677  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.361927  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.361956  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.361967  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.361976  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.361985  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.362039  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.362051  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.362058  110500 round_trippers.go:580]     Audit-Id: 46f9e573-a43c-4914-bc5c-824c57798d3a
	I0114 10:32:45.362172  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:45.362607  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:45.362618  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.362626  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.362633  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.364421  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:45.364440  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.364449  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.364457  110500 round_trippers.go:580]     Audit-Id: da79f983-cde3-4e4d-8aa4-48f23dd813de
	I0114 10:32:45.364464  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.364472  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.364481  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.364494  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.364597  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:45.859227  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:45.859266  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.859280  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.859290  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.861600  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.861624  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.861634  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.861643  110500 round_trippers.go:580]     Audit-Id: b1855bfd-9d92-4b0c-9fc4-20a77939c6d0
	I0114 10:32:45.861651  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.861659  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.861670  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.861682  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.861828  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:45.862329  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:45.862343  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.862350  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.862357  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.864381  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.864399  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.864406  110500 round_trippers.go:580]     Audit-Id: be0b664f-3541-48f9-af1d-471c790dcf54
	I0114 10:32:45.864412  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.864418  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.864426  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.864434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.864443  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.864564  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.359579  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:46.359602  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.359610  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.359617  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.361864  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:46.361890  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.361901  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.361911  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.361920  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.361932  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.361944  110500 round_trippers.go:580]     Audit-Id: 4b299044-0411-4eef-8466-4e0b7f3f27ab
	I0114 10:32:46.361955  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.362100  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:46.362546  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:46.362557  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.362564  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.362573  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.364372  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:46.364392  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.364401  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.364407  110500 round_trippers.go:580]     Audit-Id: d9cac187-6e34-4ad6-8287-02ab069b2549
	I0114 10:32:46.364417  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.364431  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.364441  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.364454  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.364577  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.859167  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:46.859196  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.859204  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.859211  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.861386  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:46.861412  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.861422  110500 round_trippers.go:580]     Audit-Id: e0167e38-5bf6-4576-abdd-910b23e13cc8
	I0114 10:32:46.861431  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.861438  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.861447  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.861462  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.861470  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.861571  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:46.862026  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:46.862037  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.862044  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.862050  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.863759  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:46.863775  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.863781  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.863787  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.863793  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.863801  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.863809  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.863817  110500 round_trippers.go:580]     Audit-Id: ca4fe996-62ad-474d-848c-ccade570ba3d
	I0114 10:32:46.863978  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.864330  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:47.359737  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:47.359759  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.359768  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.359774  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.362284  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:47.362308  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.362318  110500 round_trippers.go:580]     Audit-Id: 35b47af2-d322-448d-9b6f-19d6f47b8f05
	I0114 10:32:47.362327  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.362335  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.362344  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.362353  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.362366  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.362488  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:47.363062  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:47.363082  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.363092  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.363098  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.365099  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:47.365123  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.365133  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.365144  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.365153  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.365162  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.365170  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.365178  110500 round_trippers.go:580]     Audit-Id: 48f2bdf3-9d93-44a9-a63a-3148dd9812b7
	I0114 10:32:47.365278  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:47.859942  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:47.859965  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.859976  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.859984  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.862271  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:47.862297  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.862308  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.862317  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.862324  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.862329  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.862335  110500 round_trippers.go:580]     Audit-Id: 0634038b-38ea-4702-bfee-cb95338954a7
	I0114 10:32:47.862340  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.862439  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:47.862930  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:47.862946  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.862953  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.862960  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.864674  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:47.864737  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.864809  110500 round_trippers.go:580]     Audit-Id: e1d75173-0a5a-4010-a0d9-5c3a5b9e8a49
	I0114 10:32:47.864828  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.864834  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.864840  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.864848  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.864853  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.864975  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.359325  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:48.359347  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.359359  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.359366  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.361551  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:48.361576  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.361586  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.361595  110500 round_trippers.go:580]     Audit-Id: 8ceb8c5f-cbf9-479b-a2c0-9ce1d42b4db1
	I0114 10:32:48.361604  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.361614  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.361627  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.361641  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.361766  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:48.362247  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:48.362262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.362270  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.362276  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.364040  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:48.364062  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.364079  110500 round_trippers.go:580]     Audit-Id: 44d53443-bee2-482c-b5f3-8914e7fce187
	I0114 10:32:48.364091  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.364105  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.364113  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.364123  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.364132  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.364251  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.859955  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:48.859984  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.859998  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.860008  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.862410  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:48.862438  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.862453  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.862472  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.862479  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.862487  110500 round_trippers.go:580]     Audit-Id: 959b5886-f2da-4e70-bd52-6ce100746f2e
	I0114 10:32:48.862497  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.862509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.862637  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:48.863215  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:48.863232  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.863243  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.863253  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.864931  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:48.864949  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.864959  110500 round_trippers.go:580]     Audit-Id: 98987253-9d9e-433d-856c-fe638637ea02
	I0114 10:32:48.864969  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.864978  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.864987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.864997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.865009  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.865115  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.865421  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:49.359720  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:49.359745  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.359755  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.359766  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.361934  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:49.361955  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.361962  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.361968  110500 round_trippers.go:580]     Audit-Id: 8c97b06b-cb60-4069-83b5-4dee919ddacb
	I0114 10:32:49.361973  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.361979  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.361984  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.361994  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.362122  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:49.362693  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:49.362710  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.362721  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.362731  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.364448  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:49.364474  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.364484  110500 round_trippers.go:580]     Audit-Id: c3b4d31f-3c42-413f-a50e-7006e7195737
	I0114 10:32:49.364491  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.364497  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.364506  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.364511  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.364519  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.364634  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:49.859107  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:49.859143  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.859156  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.859166  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.861351  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:49.861376  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.861387  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.861396  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.861405  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.861413  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.861423  110500 round_trippers.go:580]     Audit-Id: acf80202-d32e-427f-9905-65e7900c3476
	I0114 10:32:49.861430  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.861524  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:49.862006  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:49.862021  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.862033  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.862045  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.863792  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:49.863808  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.863814  110500 round_trippers.go:580]     Audit-Id: 43d97ecd-c9a3-4755-ba76-b682bd120b9a
	I0114 10:32:49.863819  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.863824  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.863830  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.863835  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.863843  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.863918  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:50.359575  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:50.359599  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.359607  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.359614  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.361914  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:50.361936  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.361944  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.361949  110500 round_trippers.go:580]     Audit-Id: 4e315205-0240-45dd-a4d4-efa0c059b803
	I0114 10:32:50.361955  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.361960  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.361968  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.361974  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.362080  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:50.362668  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:50.362684  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.362695  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.362711  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.364414  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:50.364438  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.364449  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.364458  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.364476  110500 round_trippers.go:580]     Audit-Id: ba0c18f4-30aa-46b3-b9be-ce1e31c041f9
	I0114 10:32:50.364482  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.364488  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.364493  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.364600  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:50.859237  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:50.859262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.859271  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.859277  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.861712  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:50.861744  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.861756  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.861766  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.861776  110500 round_trippers.go:580]     Audit-Id: 7dc14a15-e343-4565-bb6d-eaa7202a8b3f
	I0114 10:32:50.861781  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.861787  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.861792  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.861963  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:50.862594  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:50.862612  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.862624  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.862634  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.864616  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:50.864643  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.864650  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.864656  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.864661  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.864666  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.864671  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.864676  110500 round_trippers.go:580]     Audit-Id: 777e0a1b-ac51-4e60-9cf8-245b5a0d6267
	I0114 10:32:50.864792  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:51.359157  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:51.359192  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.359201  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.359208  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.361393  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:51.361418  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.361428  110500 round_trippers.go:580]     Audit-Id: 727ea24c-1766-4936-b762-2c67365137af
	I0114 10:32:51.361436  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.361444  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.361457  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.361466  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.361476  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.361596  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:51.362079  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:51.362092  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.362102  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.362111  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.363956  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:51.363980  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.363990  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.363999  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.364012  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.364021  110500 round_trippers.go:580]     Audit-Id: 92c74176-28e0-41c0-ad03-3dd4ad01620d
	I0114 10:32:51.364033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.364041  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.364146  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:51.364451  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
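	(What the loop above is doing: roughly every 500ms minikube re-fetches the coredns pod and its node, and the pod_ready:102 line fires while the pod's Ready condition is still False. A rough re-creation of that wait using client-go — a sketch, not minikube's actual pod_ready.go; the node GET that follows each pod GET in the log is omitted here:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the pod every 500ms until its Ready condition
	// reports True or the timeout expires, mirroring the request cadence
	// visible in the timestamps above.
	func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					// This is the check behind the pod_ready log lines.
					fmt.Printf("pod %q has status \"Ready\":%q\n", name, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	The loop ends when a fresh GET returns the pod with its Ready condition flipped to True, as happens further down in this trace once the pod's resourceVersion advances.)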
	I0114 10:32:51.859788  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:51.859810  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.859819  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.859826  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.862051  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:51.862083  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.862098  110500 round_trippers.go:580]     Audit-Id: 5f8c387a-66e7-4893-b486-51686e6f4c1b
	I0114 10:32:51.862108  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.862119  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.862134  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.862143  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.862156  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.862263  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:51.862710  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:51.862724  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.862745  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.862756  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.864464  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:51.864481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.864487  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.864493  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.864498  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.864503  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.864508  110500 round_trippers.go:580]     Audit-Id: fbbfa460-93b1-4017-b7db-12d7f5dd096a
	I0114 10:32:51.864515  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.864633  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:52.359184  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:52.359216  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.359224  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.359231  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.361569  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:52.361590  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.361596  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.361602  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.361608  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.361617  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.361624  110500 round_trippers.go:580]     Audit-Id: 46cd87ca-c92a-40e7-9abc-58fb1d1da845
	I0114 10:32:52.361633  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.361785  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:52.362272  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:52.362293  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.362300  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.362306  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.364201  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:52.364225  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.364240  110500 round_trippers.go:580]     Audit-Id: 3bfff669-7a3d-4f5a-ba7a-cb801f029ad5
	I0114 10:32:52.364246  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.364252  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.364257  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.364263  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.364268  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.364382  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:52.859017  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:52.859043  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.859053  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.859061  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.861311  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:52.861335  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.861346  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.861354  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.861362  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.861370  110500 round_trippers.go:580]     Audit-Id: 6fa55034-e1ef-4b07-869f-e699f2e6ad9b
	I0114 10:32:52.861379  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.861397  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.861538  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:52.862025  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:52.862039  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.862047  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.862054  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.863574  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:52.863592  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.863601  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.863611  110500 round_trippers.go:580]     Audit-Id: 551ef0e8-c97f-4844-9422-c5752cb489bd
	I0114 10:32:52.863619  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.863627  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.863636  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.863646  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.863779  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.359608  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:53.359632  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.359645  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.359652  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.361956  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:53.361979  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.361987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.361993  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.361998  110500 round_trippers.go:580]     Audit-Id: 060b65f4-4c9b-400d-923e-42224d7765d1
	I0114 10:32:53.362003  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.362008  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.362013  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.362104  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:53.362577  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.362593  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.362603  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.362610  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.364347  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.364377  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.364386  110500 round_trippers.go:580]     Audit-Id: 97269ed2-d9cf-4bae-ae56-b417a88fc922
	I0114 10:32:53.364392  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.364398  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.364419  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.364430  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.364435  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.364547  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.364848  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:53.859079  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:53.859100  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.859108  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.859114  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.861340  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:53.861359  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.861366  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.861371  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.861377  110500 round_trippers.go:580]     Audit-Id: 4dac6af2-31c0-4d5b-ad54-0b90bc13b7f1
	I0114 10:32:53.861382  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.861387  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.861392  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.861477  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6542 chars]
	I0114 10:32:53.861953  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.861968  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.861976  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.861982  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.863752  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.863776  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.863786  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.863795  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.863803  110500 round_trippers.go:580]     Audit-Id: 3d572e93-949f-499a-a293-4ddb3e2a2d6d
	I0114 10:32:53.863812  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.863821  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.863829  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.863946  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.864233  110500 pod_ready.go:92] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.864250  110500 pod_ready.go:81] duration metric: took 16.39921044s waiting for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
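	(With coredns Ready, the same wait now runs for each remaining control-plane pod in turn, timing each one — that is what the "duration metric" lines report. A sketch of that sequencing, reusing waitPodReady from the previous sketch (same imports; pod names copied from this log, and 6*time.Minute matches the "waiting up to 6m0s" lines):

	// waitControlPlane runs the same readiness wait for each control-plane
	// pod in sequence, timing each wait the way the "duration metric"
	// lines do. Not minikube's actual code.
	func waitControlPlane(ctx context.Context, c kubernetes.Interface) error {
		for _, name := range []string{
			"etcd-multinode-102822",
			"kube-apiserver-multinode-102822",
			"kube-controller-manager-multinode-102822",
			"kube-proxy-4d5n6",
		} {
			start := time.Now()
			if err := waitPodReady(ctx, c, "kube-system", name, 6*time.Minute); err != nil {
				return fmt.Errorf("pod %q never became Ready: %w", name, err)
			}
			fmt.Printf("duration metric: took %v waiting for pod %q\n", time.Since(start), name)
		}
		return nil
	})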
	I0114 10:32:53.864258  110500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.864298  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:53.864306  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.864313  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.864318  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.865875  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.865896  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.865905  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.865916  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.865925  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.865938  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.865949  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.865961  110500 round_trippers.go:580]     Audit-Id: 82f791e6-7642-40ac-a8b0-fb679511ec02
	I0114 10:32:53.866052  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"777","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6035 chars]
	I0114 10:32:53.866410  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.866422  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.866429  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.866436  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.867777  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.867797  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.867807  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.867816  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.867825  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.867838  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.867848  110500 round_trippers.go:580]     Audit-Id: 058bebb4-b275-46c2-9a74-1b5ca44db29a
	I0114 10:32:53.867861  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.867985  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.868296  110500 pod_ready.go:92] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.868309  110500 pod_ready.go:81] duration metric: took 4.045313ms waiting for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.868324  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.868372  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102822
	I0114 10:32:53.868380  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.868387  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.868394  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.870207  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.870223  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.870234  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.870243  110500 round_trippers.go:580]     Audit-Id: 1d446373-a202-405c-b237-c6843904c253
	I0114 10:32:53.870261  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.870270  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.870283  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.870295  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.870409  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102822","namespace":"kube-system","uid":"c74a88c9-d603-4a80-a194-de75c8d0a3a5","resourceVersion":"770","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.mirror":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.seen":"2023-01-14T10:28:42.123458577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8421 chars]
	I0114 10:32:53.870795  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.870809  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.870819  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.870828  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.872366  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.872389  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.872399  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.872408  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.872424  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.872434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.872447  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.872460  110500 round_trippers.go:580]     Audit-Id: 472b3cbd-28d7-44ac-a15d-56c46a3c4908
	I0114 10:32:53.872539  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.872804  110500 pod_ready.go:92] pod "kube-apiserver-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.872817  110500 pod_ready.go:81] duration metric: took 4.480396ms waiting for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.872828  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.872870  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102822
	I0114 10:32:53.872880  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.872890  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.872900  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.874460  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.874482  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.874490  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.874497  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.874504  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.874510  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.874515  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.874520  110500 round_trippers.go:580]     Audit-Id: c397a327-3c31-4fec-abe6-15a2e07084e1
	I0114 10:32:53.874669  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102822","namespace":"kube-system","uid":"85c8a264-96f3-4fcf-affd-917b94bdd177","resourceVersion":"774","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.mirror":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.seen":"2023-01-14T10:28:42.123460297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7996 chars]
	I0114 10:32:53.875051  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.875063  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.875070  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.875077  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.876617  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.876697  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.876721  110500 round_trippers.go:580]     Audit-Id: fec4f21d-1840-4d29-b02a-62b90536fe0e
	I0114 10:32:53.876728  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.876733  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.876741  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.876747  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.876754  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.876851  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.877124  110500 pod_ready.go:92] pod "kube-controller-manager-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.877137  110500 pod_ready.go:81] duration metric: took 4.30113ms waiting for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.877149  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.877191  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d5n6
	I0114 10:32:53.877201  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.877219  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.877234  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.878711  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.878727  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.878734  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.878742  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.878750  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.878777  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.878783  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.878789  110500 round_trippers.go:580]     Audit-Id: 84194089-138c-4790-b159-7929e98278bb
	I0114 10:32:53.878874  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4d5n6","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dba561b-e827-4a6e-afd9-11c68b7e4447","resourceVersion":"471","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5522 chars]
	I0114 10:32:53.879244  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:32:53.879258  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.879265  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.879271  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.880734  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.880759  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.880769  110500 round_trippers.go:580]     Audit-Id: f14db8a2-ec1f-4484-a93c-8313811b037d
	I0114 10:32:53.880779  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.880788  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.880796  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.880802  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.880808  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.880892  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4","resourceVersion":"549","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io
/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1" [truncated 4430 chars]
	I0114 10:32:53.881144  110500 pod_ready.go:92] pod "kube-proxy-4d5n6" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.881161  110500 pod_ready.go:81] duration metric: took 4.002111ms waiting for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.881169  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.059627  110500 request.go:614] Waited for 178.374902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:54.059726  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:54.059741  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.059754  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.059768  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.061871  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.061896  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.061908  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.061916  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.061923  110500 round_trippers.go:580]     Audit-Id: aed13262-a010-46cc-af38-a5bd25ab0d48
	I0114 10:32:54.061933  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.061944  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.061957  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.062158  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bzd24","generateName":"kube-proxy-","namespace":"kube-system","uid":"3191f786-4823-486a-90e6-be1b1180c23a","resourceVersion":"660","creationTimestamp":"2023-01-14T10:30:20Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
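The "Waited for ... due to client-side throttling" lines above come from the Kubernetes client's local token-bucket rate limiter, which delays requests before they ever reach the apiserver's priority-and-fairness layer (hence the explicit "not priority and fairness"). A minimal sketch of the same idea, using golang.org/x/time/rate rather than client-go's own flowcontrol package; the QPS and burst values are assumptions, not minikube's actual settings:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: ~5 requests/second steady state with a burst of 10
	// (assumed values for illustration, not client-go's exact defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		// Wait blocks until a token is available or the context is done.
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println("limiter:", err)
			return
		}
		if d := time.Since(start); d > time.Millisecond {
			fmt.Printf("request %d throttled for %s\n", i, d)
		}
		// ... issue the GET here ...
	}
}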
	I0114 10:32:54.259954  110500 request.go:614] Waited for 197.349508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:54.260021  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:54.260027  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.260035  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.260044  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.261934  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:54.261954  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.261961  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.261967  110500 round_trippers.go:580]     Audit-Id: 4baf5a74-b2c4-429b-8b3c-4634d55c2954
	I0114 10:32:54.261972  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.261977  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.261983  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.261991  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.262089  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m03","uid":"7fb9d125-0cce-4853-b9a9-9348e20e7ae7","resourceVersion":"674","creationTimestamp":"2023-01-14T10:31:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu
mes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{"." [truncated 4248 chars]
	I0114 10:32:54.262369  110500 pod_ready.go:92] pod "kube-proxy-bzd24" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:54.262382  110500 pod_ready.go:81] duration metric: took 381.20861ms waiting for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.262392  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.459841  110500 request.go:614] Waited for 197.376659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:54.459903  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:54.459909  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.459917  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.459930  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.462326  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.462351  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.462362  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.462371  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.462381  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.462394  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.462407  110500 round_trippers.go:580]     Audit-Id: d19a543b-1f00-4a60-a98d-7c9d97051362
	I0114 10:32:54.462420  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.462560  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlcll","generateName":"kube-proxy-","namespace":"kube-system","uid":"91e05737-5cbf-404c-8b7c-75045f584885","resourceVersion":"718","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5727 chars]
	I0114 10:32:54.659432  110500 request.go:614] Waited for 196.351122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:54.659482  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:54.659487  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.659495  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.659501  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.661545  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.661569  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.661580  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.661589  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.661598  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.661610  110500 round_trippers.go:580]     Audit-Id: e4c1d17c-ce25-4fe7-a6f5-b07ee5fc48ab
	I0114 10:32:54.661624  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.661635  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.661721  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:54.662011  110500 pod_ready.go:92] pod "kube-proxy-qlcll" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:54.662023  110500 pod_ready.go:81] duration metric: took 399.620307ms waiting for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.662034  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.859494  110500 request.go:614] Waited for 197.400011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:54.859552  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:54.859557  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.859564  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.859571  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.861821  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.861852  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.861865  110500 round_trippers.go:580]     Audit-Id: ccc6f823-3989-4b53-8ff0-d1337b0b0a61
	I0114 10:32:54.861874  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.861886  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.861899  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.861909  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.861919  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.862042  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102822","namespace":"kube-system","uid":"63ee442e-88de-44d9-8512-98c56f1b4942","resourceVersion":"725","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.mirror":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.seen":"2023-01-14T10:28:42.123461701Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4878 chars]
	I0114 10:32:55.059823  110500 request.go:614] Waited for 197.354082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:55.059872  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:55.059876  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.059884  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.059897  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.062004  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:55.062027  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.062038  110500 round_trippers.go:580]     Audit-Id: d564f563-4343-426b-a870-97784639d546
	I0114 10:32:55.062046  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.062056  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.062064  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.062073  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.062086  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.062203  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:55.062558  110500 pod_ready.go:92] pod "kube-scheduler-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:55.062577  110500 pod_ready.go:81] duration metric: took 400.532751ms waiting for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:55.062591  110500 pod_ready.go:38] duration metric: took 17.803879464s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
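For context, the pod_ready verdicts above (`has status "Ready":"True"`) are derived from the pod's status.conditions, not from its Running phase; the storage-provisioner entry later in this log shows a pod that is Running yet not Ready. A stdlib-only sketch of that check, assuming the v1 Pod JSON shape seen in the response bodies above (podReady is a hypothetical helper, not minikube's):

package main

import (
	"encoding/json"
	"fmt"
)

// podReady reports whether a v1 Pod JSON document carries a
// status condition of type "Ready" with status "True".
func podReady(raw []byte) (bool, error) {
	var pod struct {
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(raw, &pod); err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, err := podReady(raw)
	fmt.Println(ok, err) // true <nil>
}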
	I0114 10:32:55.062613  110500 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:32:55.062662  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:55.072334  110500 command_runner.go:130] > 1113
	I0114 10:32:55.072394  110500 api_server.go:71] duration metric: took 18.002517039s to wait for apiserver process to appear ...
	I0114 10:32:55.072408  110500 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:32:55.072418  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:55.077405  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
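The healthz wait above is a plain HTTPS GET against /healthz that succeeds once the body reads "ok". A minimal sketch of that probe; note it skips TLS verification purely for illustration, whereas minikube authenticates with the client certificates from the profile's kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: the real client presents certs from the
		// kubeconfig instead of skipping certificate verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // want: 200 ok
}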
	I0114 10:32:55.077450  110500 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0114 10:32:55.077454  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.077462  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.077468  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.078157  110500 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0114 10:32:55.078182  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.078189  110500 round_trippers.go:580]     Audit-Id: adb19759-4de9-46d5-95cf-8b480d9bd7f5
	I0114 10:32:55.078195  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.078200  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.078207  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.078213  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.078219  110500 round_trippers.go:580]     Content-Length: 263
	I0114 10:32:55.078224  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.078239  110500 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
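The /version response is small enough to decode directly; a short sketch extracting gitVersion, matching the "control plane version" line that follows:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	body := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.3"}`)
	var v struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}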
	I0114 10:32:55.078279  110500 api_server.go:140] control plane version: v1.25.3
	I0114 10:32:55.078291  110500 api_server.go:130] duration metric: took 5.878523ms to wait for apiserver health ...
	I0114 10:32:55.078298  110500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:32:55.259707  110500 request.go:614] Waited for 181.324482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.259776  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.259781  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.259791  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.259800  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.263248  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:55.263272  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.263280  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.263286  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.263292  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.263297  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.263303  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.263310  110500 round_trippers.go:580]     Audit-Id: 8a596230-8294-46c4-b65b-17375baa5d42
	I0114 10:32:55.263957  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84909 chars]
	I0114 10:32:55.266933  110500 system_pods.go:59] 12 kube-system pods found
	I0114 10:32:55.266958  110500 system_pods.go:61] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:55.266964  110500 system_pods.go:61] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running
	I0114 10:32:55.266968  110500 system_pods.go:61] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:55.266972  110500 system_pods.go:61] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:55.266976  110500 system_pods.go:61] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running
	I0114 10:32:55.266980  110500 system_pods.go:61] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:55.266986  110500 system_pods.go:61] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running
	I0114 10:32:55.266993  110500 system_pods.go:61] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:55.266998  110500 system_pods.go:61] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:55.267003  110500 system_pods.go:61] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:55.267007  110500 system_pods.go:61] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:55.267014  110500 system_pods.go:61] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:32:55.267026  110500 system_pods.go:74] duration metric: took 188.723685ms to wait for pod list to return data ...
	I0114 10:32:55.267033  110500 default_sa.go:34] waiting for default service account to be created ...
	I0114 10:32:55.459429  110500 request.go:614] Waited for 192.340757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0114 10:32:55.459508  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0114 10:32:55.459520  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.459532  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.459547  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.461495  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:55.461513  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.461520  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.461526  110500 round_trippers.go:580]     Audit-Id: 29aeece9-a829-4503-b39f-8d5844636b92
	I0114 10:32:55.461531  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.461536  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.461541  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.461547  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.461552  110500 round_trippers.go:580]     Content-Length: 261
	I0114 10:32:55.461569  110500 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ed5470f9-ec28-44cb-ac49-0dbdbeab7993","resourceVersion":"329","creationTimestamp":"2023-01-14T10:28:55Z"}}]}
	I0114 10:32:55.461729  110500 default_sa.go:45] found service account: "default"
	I0114 10:32:55.461745  110500 default_sa.go:55] duration metric: took 194.706664ms for default service account to be created ...
	I0114 10:32:55.461752  110500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 10:32:55.659076  110500 request.go:614] Waited for 197.269144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.659133  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.659137  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.659145  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.659152  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.662730  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:55.662755  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.662770  110500 round_trippers.go:580]     Audit-Id: d073e5a7-e138-4f76-b486-e769fbc5f5e6
	I0114 10:32:55.662778  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.662786  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.662794  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.662804  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.662817  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.663478  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84909 chars]
	I0114 10:32:55.666072  110500 system_pods.go:86] 12 kube-system pods found
	I0114 10:32:55.666095  110500 system_pods.go:89] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:55.666101  110500 system_pods.go:89] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running
	I0114 10:32:55.666106  110500 system_pods.go:89] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:55.666111  110500 system_pods.go:89] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:55.666116  110500 system_pods.go:89] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running
	I0114 10:32:55.666123  110500 system_pods.go:89] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:55.666132  110500 system_pods.go:89] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running
	I0114 10:32:55.666138  110500 system_pods.go:89] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:55.666145  110500 system_pods.go:89] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:55.666149  110500 system_pods.go:89] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:55.666153  110500 system_pods.go:89] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:55.666163  110500 system_pods.go:89] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:32:55.666171  110500 system_pods.go:126] duration metric: took 204.414948ms to wait for k8s-apps to be running ...
	I0114 10:32:55.666181  110500 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 10:32:55.666219  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:32:55.675950  110500 system_svc.go:56] duration metric: took 9.754834ms WaitForService to wait for kubelet.
	I0114 10:32:55.675980  110500 kubeadm.go:573] duration metric: took 18.606132423s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 10:32:55.675999  110500 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:32:55.859434  110500 request.go:614] Waited for 183.362713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:55.859502  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:55.859514  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.859522  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.859528  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.862021  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:55.862052  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.862064  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.862073  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.862082  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.862095  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.862111  110500 round_trippers.go:580]     Audit-Id: 6dd48a4e-f50e-492a-8dff-09e669805baa
	I0114 10:32:55.862119  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.862314  110500 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15962 chars]
	I0114 10:32:55.862905  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862918  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862930  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862936  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862941  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862945  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862952  110500 node_conditions.go:105] duration metric: took 186.947551ms to run NodePressure ...
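The NodePressure pass above reads each node's capacity out of the NodeList; cpu and ephemeral-storage come back as resource-quantity strings ("8", "304681132Ki"). A stdlib-only sketch of that extraction, with a trimmed-down NodeList document standing in for the real response:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"items":[{"metadata":{"name":"multinode-102822"},
	  "status":{"capacity":{"cpu":"8","ephemeral-storage":"304681132Ki"}}}]}`)
	var list struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, n := range list.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"],
			n.Status.Capacity["ephemeral-storage"])
	}
}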
	I0114 10:32:55.862961  110500 start.go:217] waiting for startup goroutines ...
	I0114 10:32:55.863404  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:55.863497  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:55.866941  110500 out.go:177] * Starting worker node multinode-102822-m02 in cluster multinode-102822
	I0114 10:32:55.868288  110500 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:32:55.869765  110500 out.go:177] * Pulling base image ...
	I0114 10:32:55.871152  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:32:55.871175  110500 cache.go:57] Caching tarball of preloaded images
	I0114 10:32:55.871229  110500 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:32:55.871304  110500 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:32:55.871330  110500 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:32:55.871441  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:55.893561  110500 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:32:55.893584  110500 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:32:55.893609  110500 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:32:55.893646  110500 start.go:364] acquiring machines lock for multinode-102822-m02: {Name:mk25af419661492cbd58b718b64b51677c98136a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:32:55.893781  110500 start.go:368] acquired machines lock for "multinode-102822-m02" in 104.709µs
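The machines lock acquired above serializes operations on a machine; the spec printed in the log (Name/Clock/Delay/Timeout/Cancel) is the shape of a cross-process lock, so the in-process version below only illustrates the acquire-with-timeout pattern with the same 500ms delay and 10m timeout:

package main

import (
	"errors"
	"fmt"
	"time"
)

type machineLock chan struct{}

func newMachineLock() machineLock { return make(machineLock, 1) }

// acquire polls every delay until the lock is free or timeout expires.
func (l machineLock) acquire(delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		select {
		case l <- struct{}{}:
			return nil
		default:
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring machines lock")
			}
			time.Sleep(delay)
		}
	}
}

func (l machineLock) release() { <-l }

func main() {
	lock := newMachineLock()
	if err := lock.acquire(500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer lock.release()
	fmt.Println("acquired machines lock")
}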
	I0114 10:32:55.893802  110500 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:32:55.893807  110500 fix.go:55] fixHost starting: m02
	I0114 10:32:55.894020  110500 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:32:55.917751  110500 fix.go:103] recreateIfNeeded on multinode-102822-m02: state=Stopped err=<nil>
	W0114 10:32:55.917777  110500 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:32:55.920259  110500 out.go:177] * Restarting existing docker container for "multinode-102822-m02" ...
	I0114 10:32:55.921900  110500 cli_runner.go:164] Run: docker start multinode-102822-m02
	I0114 10:32:56.303574  110500 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:32:56.328701  110500 kic.go:426] container "multinode-102822-m02" state is running.
	I0114 10:32:56.329001  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:32:56.353818  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:56.354054  110500 machine.go:88] provisioning docker machine ...
	I0114 10:32:56.354080  110500 ubuntu.go:169] provisioning hostname "multinode-102822-m02"
	I0114 10:32:56.354126  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:56.378925  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:56.379088  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32872 <nil> <nil>}
	I0114 10:32:56.379107  110500 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102822-m02 && echo "multinode-102822-m02" | sudo tee /etc/hostname
	I0114 10:32:56.379767  110500 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47252->127.0.0.1:32872: read: connection reset by peer
	I0114 10:32:59.504292  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102822-m02
	
	I0114 10:32:59.504372  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.529104  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:59.529255  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32872 <nil> <nil>}
	I0114 10:32:59.529273  110500 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102822-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102822-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102822-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
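The SSH snippet above is an idempotent /etc/hosts edit: if no line already maps the hostname, it rewrites an existing 127.0.1.1 entry or appends one. The same logic expressed over the file contents in Go (local and illustrative only; ensureHostname is a hypothetical helper):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns hosts with a "127.0.1.1 <name>" mapping,
// rewriting an existing 127.0.1.1 line or appending a new one.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return hosts // already mapped
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost", "multinode-102822-m02"))
}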
	I0114 10:32:59.643446  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:32:59.643478  110500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:32:59.643495  110500 ubuntu.go:177] setting up certificates
	I0114 10:32:59.643503  110500 provision.go:83] configureAuth start
	I0114 10:32:59.643550  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:32:59.666898  110500 provision.go:138] copyHostCerts
	I0114 10:32:59.666931  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:59.666953  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:32:59.666961  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:59.667021  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:32:59.667087  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:59.667104  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:32:59.667109  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:59.667132  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:32:59.667170  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:59.667183  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:32:59.667189  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:59.667207  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:32:59.667255  110500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.multinode-102822-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-102822-m02]
	I0114 10:32:59.772545  110500 provision.go:172] copyRemoteCerts
	I0114 10:32:59.772598  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:32:59.772629  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.795246  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:32:59.879398  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 10:32:59.879459  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:32:59.896300  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 10:32:59.896363  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0114 10:32:59.913524  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 10:32:59.913588  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:32:59.930401  110500 provision.go:86] duration metric: configureAuth took 286.883524ms
	I0114 10:32:59.930432  110500 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:32:59.930616  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:59.930627  110500 machine.go:91] provisioned docker machine in 3.576558371s
	I0114 10:32:59.930634  110500 start.go:300] post-start starting for "multinode-102822-m02" (driver="docker")
	I0114 10:32:59.930640  110500 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:32:59.930681  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:32:59.930713  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.954609  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.039058  110500 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:33:00.041623  110500 command_runner.go:130] > NAME="Ubuntu"
	I0114 10:33:00.041638  110500 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 10:33:00.041643  110500 command_runner.go:130] > ID=ubuntu
	I0114 10:33:00.041652  110500 command_runner.go:130] > ID_LIKE=debian
	I0114 10:33:00.041659  110500 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 10:33:00.041669  110500 command_runner.go:130] > VERSION_ID="20.04"
	I0114 10:33:00.041686  110500 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 10:33:00.041695  110500 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 10:33:00.041702  110500 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 10:33:00.041715  110500 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 10:33:00.041722  110500 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 10:33:00.041727  110500 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 10:33:00.041809  110500 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:33:00.041826  110500 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:33:00.041837  110500 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:33:00.041848  110500 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:33:00.041863  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:33:00.041918  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:33:00.042001  110500 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:33:00.042015  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /etc/ssl/certs/103062.pem
	I0114 10:33:00.042098  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:33:00.048428  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:33:00.065256  110500 start.go:303] post-start completed in 134.608719ms
	I0114 10:33:00.065333  110500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:33:00.065370  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.089347  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.171707  110500 command_runner.go:130] > 18%
	I0114 10:33:00.171953  110500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:33:00.175843  110500 command_runner.go:130] > 239G
	I0114 10:33:00.175869  110500 fix.go:57] fixHost completed within 4.282059343s
	I0114 10:33:00.175880  110500 start.go:83] releasing machines lock for "multinode-102822-m02", held for 4.282085064s
	I0114 10:33:00.175958  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:33:00.204131  110500 out.go:177] * Found network options:
	I0114 10:33:00.205771  110500 out.go:177]   - NO_PROXY=192.168.58.2
	W0114 10:33:00.207135  110500 proxy.go:119] fail to check proxy env: Error ip not in block
	W0114 10:33:00.207169  110500 proxy.go:119] fail to check proxy env: Error ip not in block
	I0114 10:33:00.207243  110500 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:33:00.207283  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.207345  110500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:33:00.207411  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.233561  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.237305  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.349674  110500 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 10:33:00.349754  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:33:00.358943  110500 docker.go:189] disabling docker service ...
	I0114 10:33:00.358987  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:33:00.368539  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:33:00.377330  110500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:33:00.458087  110500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:33:00.532879  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:33:00.542016  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:33:00.553731  110500 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:33:00.553758  110500 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:33:00.554499  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:33:00.562521  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:33:00.570518  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:33:00.578409  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
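Each of the four commands above rewrites a whole line of /etc/containerd/config.toml with an anchored sed expression. An equivalent pass in Go using (?m) multiline regexes; a real TOML parser would be the more robust route, so treat this as a sketch:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "sandbox_image = \"registry.k8s.io/pause:3.6\"\nSystemdCgroup = true\n"

	// Same shape as: sed -e 's|^.*sandbox_image = .*$|...|'
	rules := []struct{ re, repl string }{
		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`},
		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
	}
	for _, r := range rules {
		conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
	}
	fmt.Print(conf)
}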
	I0114 10:33:00.586533  110500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:33:00.593065  110500 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0114 10:33:00.593116  110500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:33:00.599333  110500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:33:00.669768  110500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:33:00.743175  110500 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:33:00.743240  110500 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:33:00.746690  110500 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0114 10:33:00.746715  110500 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 10:33:00.746721  110500 command_runner.go:130] > Device: fch/252d	Inode: 118         Links: 1
	I0114 10:33:00.746728  110500 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:33:00.746734  110500 command_runner.go:130] > Access: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746739  110500 command_runner.go:130] > Modify: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746744  110500 command_runner.go:130] > Change: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746750  110500 command_runner.go:130] >  Birth: -
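"Will wait 60s for socket path" is a poll for /run/containerd/containerd.sock to appear after the containerd restart, and the stat output above confirms it is a socket. A sketch of that wait, assuming a fixed 200ms poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls path until it exists as a unix socket or timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket ready")
}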
	I0114 10:33:00.746772  110500 start.go:472] Will wait 60s for crictl version
	I0114 10:33:00.746812  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:33:00.749846  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:33:00.749902  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:33:00.774327  110500 command_runner.go:130] ! time="2023-01-14T10:33:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:33:00.774386  110500 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:33:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:33:15.181491  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:33:15.203527  110500 command_runner.go:130] > Version:  0.1.0
	I0114 10:33:15.203557  110500 command_runner.go:130] > RuntimeName:  containerd
	I0114 10:33:15.203564  110500 command_runner.go:130] > RuntimeVersion:  1.6.10
	I0114 10:33:15.203572  110500 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0114 10:33:15.203593  110500 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:33:15.203645  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:33:15.225426  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:33:15.226779  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:33:15.248561  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:33:15.252201  110500 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:33:15.253862  110500 out.go:177]   - env NO_PROXY=192.168.58.2
	I0114 10:33:15.255234  110500 cli_runner.go:164] Run: docker network inspect multinode-102822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:33:15.278552  110500 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0114 10:33:15.281742  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
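	The one-liner above is an idempotent /etc/hosts update: drop any existing line ending in a tab plus host.minikube.internal, append the fresh mapping, and copy the result back with sudo. A rough Go equivalent of that filter-and-append step, as a sketch only (real code would write through a temp file and then copy, as the shell version does):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes any stale line for the host, then appends the
// new ip<TAB>host mapping, matching the grep -v / echo pipeline above.
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}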
	I0114 10:33:15.290843  110500 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822 for IP: 192.168.58.3
	I0114 10:33:15.290938  110500 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:33:15.290983  110500 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:33:15.291001  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 10:33:15.291018  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 10:33:15.291034  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:33:15.291044  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:33:15.291086  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:33:15.291122  110500 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:33:15.291137  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:33:15.291172  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:33:15.291200  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:33:15.291232  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:33:15.291294  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:33:15.291328  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem -> /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.291340  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.291350  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.291733  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:33:15.308950  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:33:15.326682  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:33:15.343586  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:33:15.360810  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:33:15.378267  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:33:15.394623  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:33:15.411806  110500 ssh_runner.go:195] Run: openssl version
	I0114 10:33:15.416453  110500 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 10:33:15.416527  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:33:15.423431  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426436  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426476  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426513  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.431093  110500 command_runner.go:130] > 51391683
	I0114 10:33:15.431280  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:33:15.438119  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:33:15.445156  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448124  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448172  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448210  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.452776  110500 command_runner.go:130] > 3ec20f2e
	I0114 10:33:15.452816  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:33:15.459313  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:33:15.466112  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.468925  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.469032  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.469077  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.473645  110500 command_runner.go:130] > b5213941
	I0114 10:33:15.473797  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
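	Each `openssl x509 -hash -noout` call above prints the certificate's subject hash (51391683, 3ec20f2e, b5213941), and the symlink that follows gives the file the <hash>.0 name OpenSSL expects when it scans /etc/ssl/certs. A small sketch of that convention, shelling out to openssl the same way the log does (error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash recreates the <hash>.0 symlink convention: OpenSSL
// locates trusted CAs in /etc/ssl/certs by subject hash, so each PEM
// needs a hash-named link.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}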
	I0114 10:33:15.480540  110500 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:33:15.504106  110500 command_runner.go:130] > {
	I0114 10:33:15.504131  110500 command_runner.go:130] >   "status": {
	I0114 10:33:15.504139  110500 command_runner.go:130] >     "conditions": [
	I0114 10:33:15.504148  110500 command_runner.go:130] >       {
	I0114 10:33:15.504158  110500 command_runner.go:130] >         "type": "RuntimeReady",
	I0114 10:33:15.504166  110500 command_runner.go:130] >         "status": true,
	I0114 10:33:15.504173  110500 command_runner.go:130] >         "reason": "",
	I0114 10:33:15.504181  110500 command_runner.go:130] >         "message": ""
	I0114 10:33:15.504191  110500 command_runner.go:130] >       },
	I0114 10:33:15.504197  110500 command_runner.go:130] >       {
	I0114 10:33:15.504207  110500 command_runner.go:130] >         "type": "NetworkReady",
	I0114 10:33:15.504216  110500 command_runner.go:130] >         "status": true,
	I0114 10:33:15.504226  110500 command_runner.go:130] >         "reason": "",
	I0114 10:33:15.504245  110500 command_runner.go:130] >         "message": ""
	I0114 10:33:15.504252  110500 command_runner.go:130] >       }
	I0114 10:33:15.504255  110500 command_runner.go:130] >     ]
	I0114 10:33:15.504259  110500 command_runner.go:130] >   },
	I0114 10:33:15.504263  110500 command_runner.go:130] >   "cniconfig": {
	I0114 10:33:15.504267  110500 command_runner.go:130] >     "PluginDirs": [
	I0114 10:33:15.504272  110500 command_runner.go:130] >       "/opt/cni/bin"
	I0114 10:33:15.504276  110500 command_runner.go:130] >     ],
	I0114 10:33:15.504281  110500 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.mk",
	I0114 10:33:15.504286  110500 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0114 10:33:15.504290  110500 command_runner.go:130] >     "Prefix": "eth",
	I0114 10:33:15.504295  110500 command_runner.go:130] >     "Networks": [
	I0114 10:33:15.504299  110500 command_runner.go:130] >       {
	I0114 10:33:15.504306  110500 command_runner.go:130] >         "Config": {
	I0114 10:33:15.504311  110500 command_runner.go:130] >           "Name": "cni-loopback",
	I0114 10:33:15.504319  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:33:15.504325  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:33:15.504329  110500 command_runner.go:130] >             {
	I0114 10:33:15.504334  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504341  110500 command_runner.go:130] >                 "type": "loopback",
	I0114 10:33:15.504345  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:33:15.504352  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504356  110500 command_runner.go:130] >               },
	I0114 10:33:15.504361  110500 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0114 10:33:15.504367  110500 command_runner.go:130] >             }
	I0114 10:33:15.504370  110500 command_runner.go:130] >           ],
	I0114 10:33:15.504382  110500 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0114 10:33:15.504386  110500 command_runner.go:130] >         },
	I0114 10:33:15.504391  110500 command_runner.go:130] >         "IFName": "lo"
	I0114 10:33:15.504397  110500 command_runner.go:130] >       },
	I0114 10:33:15.504400  110500 command_runner.go:130] >       {
	I0114 10:33:15.504406  110500 command_runner.go:130] >         "Config": {
	I0114 10:33:15.504413  110500 command_runner.go:130] >           "Name": "kindnet",
	I0114 10:33:15.504419  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:33:15.504424  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:33:15.504431  110500 command_runner.go:130] >             {
	I0114 10:33:15.504435  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504442  110500 command_runner.go:130] >                 "type": "ptp",
	I0114 10:33:15.504446  110500 command_runner.go:130] >                 "ipam": {
	I0114 10:33:15.504452  110500 command_runner.go:130] >                   "type": "host-local"
	I0114 10:33:15.504456  110500 command_runner.go:130] >                 },
	I0114 10:33:15.504460  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504466  110500 command_runner.go:130] >               },
	I0114 10:33:15.504480  110500 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.1.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0114 10:33:15.504488  110500 command_runner.go:130] >             },
	I0114 10:33:15.504492  110500 command_runner.go:130] >             {
	I0114 10:33:15.504499  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504503  110500 command_runner.go:130] >                 "type": "portmap",
	I0114 10:33:15.504510  110500 command_runner.go:130] >                 "capabilities": {
	I0114 10:33:15.504515  110500 command_runner.go:130] >                   "portMappings": true
	I0114 10:33:15.504521  110500 command_runner.go:130] >                 },
	I0114 10:33:15.504527  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:33:15.504535  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504540  110500 command_runner.go:130] >               },
	I0114 10:33:15.504550  110500 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0114 10:33:15.504554  110500 command_runner.go:130] >             }
	I0114 10:33:15.504561  110500 command_runner.go:130] >           ],
	I0114 10:33:15.504591  110500 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.1.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0114 10:33:15.504601  110500 command_runner.go:130] >         },
	I0114 10:33:15.504605  110500 command_runner.go:130] >         "IFName": "eth0"
	I0114 10:33:15.504609  110500 command_runner.go:130] >       }
	I0114 10:33:15.504612  110500 command_runner.go:130] >     ]
	I0114 10:33:15.504618  110500 command_runner.go:130] >   },
	I0114 10:33:15.504622  110500 command_runner.go:130] >   "config": {
	I0114 10:33:15.504626  110500 command_runner.go:130] >     "containerd": {
	I0114 10:33:15.504631  110500 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0114 10:33:15.504637  110500 command_runner.go:130] >       "defaultRuntimeName": "default",
	I0114 10:33:15.504641  110500 command_runner.go:130] >       "defaultRuntime": {
	I0114 10:33:15.504649  110500 command_runner.go:130] >         "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504654  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:33:15.504658  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:33:15.504665  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:33:15.504670  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:33:15.504674  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:33:15.504681  110500 command_runner.go:130] >         "options": null,
	I0114 10:33:15.504688  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:33:15.504697  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:33:15.504704  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:33:15.504715  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:33:15.504723  110500 command_runner.go:130] >       },
	I0114 10:33:15.504730  110500 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0114 10:33:15.504740  110500 command_runner.go:130] >         "runtimeType": "",
	I0114 10:33:15.504751  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:33:15.504757  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:33:15.504766  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:33:15.504772  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:33:15.504781  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:33:15.504788  110500 command_runner.go:130] >         "options": null,
	I0114 10:33:15.504800  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:33:15.504810  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:33:15.504818  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:33:15.504823  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:33:15.504829  110500 command_runner.go:130] >       },
	I0114 10:33:15.504833  110500 command_runner.go:130] >       "runtimes": {
	I0114 10:33:15.504839  110500 command_runner.go:130] >         "default": {
	I0114 10:33:15.504844  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504850  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:33:15.504854  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:33:15.504859  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:33:15.504865  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:33:15.504869  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:33:15.504876  110500 command_runner.go:130] >           "options": null,
	I0114 10:33:15.504884  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:33:15.504891  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:33:15.504896  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:33:15.504903  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:33:15.504907  110500 command_runner.go:130] >         },
	I0114 10:33:15.504914  110500 command_runner.go:130] >         "runc": {
	I0114 10:33:15.504919  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504926  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:33:15.504930  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:33:15.504937  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:33:15.504942  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:33:15.504949  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:33:15.504953  110500 command_runner.go:130] >           "options": {
	I0114 10:33:15.504968  110500 command_runner.go:130] >             "SystemdCgroup": false
	I0114 10:33:15.504974  110500 command_runner.go:130] >           },
	I0114 10:33:15.504980  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:33:15.504985  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:33:15.504991  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:33:15.504997  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:33:15.505001  110500 command_runner.go:130] >         }
	I0114 10:33:15.505005  110500 command_runner.go:130] >       },
	I0114 10:33:15.505011  110500 command_runner.go:130] >       "noPivot": false,
	I0114 10:33:15.505016  110500 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0114 10:33:15.505024  110500 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0114 10:33:15.505029  110500 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0114 10:33:15.505035  110500 command_runner.go:130] >     },
	I0114 10:33:15.505039  110500 command_runner.go:130] >     "cni": {
	I0114 10:33:15.505047  110500 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0114 10:33:15.505053  110500 command_runner.go:130] >       "confDir": "/etc/cni/net.mk",
	I0114 10:33:15.505057  110500 command_runner.go:130] >       "maxConfNum": 1,
	I0114 10:33:15.505065  110500 command_runner.go:130] >       "confTemplate": "",
	I0114 10:33:15.505070  110500 command_runner.go:130] >       "ipPref": ""
	I0114 10:33:15.505077  110500 command_runner.go:130] >     },
	I0114 10:33:15.505081  110500 command_runner.go:130] >     "registry": {
	I0114 10:33:15.505088  110500 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0114 10:33:15.505092  110500 command_runner.go:130] >       "mirrors": null,
	I0114 10:33:15.505099  110500 command_runner.go:130] >       "configs": null,
	I0114 10:33:15.505103  110500 command_runner.go:130] >       "auths": null,
	I0114 10:33:15.505109  110500 command_runner.go:130] >       "headers": null
	I0114 10:33:15.505114  110500 command_runner.go:130] >     },
	I0114 10:33:15.505120  110500 command_runner.go:130] >     "imageDecryption": {
	I0114 10:33:15.505124  110500 command_runner.go:130] >       "keyModel": "node"
	I0114 10:33:15.505130  110500 command_runner.go:130] >     },
	I0114 10:33:15.505134  110500 command_runner.go:130] >     "disableTCPService": true,
	I0114 10:33:15.505141  110500 command_runner.go:130] >     "streamServerAddress": "",
	I0114 10:33:15.505145  110500 command_runner.go:130] >     "streamServerPort": "10010",
	I0114 10:33:15.505150  110500 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0114 10:33:15.505154  110500 command_runner.go:130] >     "enableSelinux": false,
	I0114 10:33:15.505159  110500 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0114 10:33:15.505165  110500 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.8",
	I0114 10:33:15.505169  110500 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0114 10:33:15.505176  110500 command_runner.go:130] >     "systemdCgroup": false,
	I0114 10:33:15.505180  110500 command_runner.go:130] >     "enableTLSStreaming": false,
	I0114 10:33:15.505187  110500 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0114 10:33:15.505192  110500 command_runner.go:130] >       "tlsCertFile": "",
	I0114 10:33:15.505198  110500 command_runner.go:130] >       "tlsKeyFile": ""
	I0114 10:33:15.505202  110500 command_runner.go:130] >     },
	I0114 10:33:15.505207  110500 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0114 10:33:15.505214  110500 command_runner.go:130] >     "disableCgroup": false,
	I0114 10:33:15.505218  110500 command_runner.go:130] >     "disableApparmor": false,
	I0114 10:33:15.505228  110500 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0114 10:33:15.505238  110500 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0114 10:33:15.505243  110500 command_runner.go:130] >     "disableProcMount": false,
	I0114 10:33:15.505247  110500 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0114 10:33:15.505252  110500 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0114 10:33:15.505257  110500 command_runner.go:130] >     "disableHugetlbController": true,
	I0114 10:33:15.505266  110500 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0114 10:33:15.505271  110500 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0114 10:33:15.505278  110500 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0114 10:33:15.505283  110500 command_runner.go:130] >     "enableUnprivilegedPorts": false,
	I0114 10:33:15.505292  110500 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0114 10:33:15.505298  110500 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0114 10:33:15.505306  110500 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0114 10:33:15.505312  110500 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0114 10:33:15.505320  110500 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0114 10:33:15.505324  110500 command_runner.go:130] >   },
	I0114 10:33:15.505330  110500 command_runner.go:130] >   "golang": "go1.18.8",
	I0114 10:33:15.505335  110500 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0114 10:33:15.505342  110500 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0114 10:33:15.505345  110500 command_runner.go:130] > }
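	The crictl info dump confirms that the two sed edits made before the containerd restart took effect: "systemdCgroup": false and "confDir": "/etc/cni/net.mk". A small sketch that re-checks just those two fields by parsing the same JSON (the struct fields mirror the output above; everything else is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criInfo models only the two fields of `crictl info` we care about.
type criInfo struct {
	Config struct {
		SystemdCgroup bool `json:"systemdCgroup"`
		CNI           struct {
			ConfDir string `json:"confDir"`
		} `json:"cni"`
	} `json:"config"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("systemdCgroup=%v confDir=%s\n",
		info.Config.SystemdCgroup, info.Config.CNI.ConfDir)
}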
	I0114 10:33:15.505500  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:33:15.505509  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:33:15.505519  110500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:33:15.505532  110500 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102822 NodeName:multinode-102822-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:33:15.505648  110500 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-102822-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:33:15.505707  110500 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-102822-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
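	minikube renders the per-node kubeadm config and kubelet unit above from Go templates, substituting the node name, IP, and CRI socket. A minimal text/template sketch of just the nodeRegistration stanza, with a placeholder template of my own (not minikube's actual template source):

package main

import (
	"os"
	"text/template"
)

// nodeTmpl is an illustrative stand-in covering only the
// nodeRegistration stanza from the config dump above.
const nodeTmpl = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("node").Parse(nodeTmpl))
	// Values copied from this log's worker node m02.
	_ = t.Execute(os.Stdout, map[string]string{
		"CRISocket": "/run/containerd/containerd.sock",
		"NodeName":  "multinode-102822-m02",
		"NodeIP":    "192.168.58.3",
	})
}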
	I0114 10:33:15.505750  110500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:33:15.512450  110500 command_runner.go:130] > kubeadm
	I0114 10:33:15.512472  110500 command_runner.go:130] > kubectl
	I0114 10:33:15.512480  110500 command_runner.go:130] > kubelet
	I0114 10:33:15.513054  110500 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:33:15.513107  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0114 10:33:15.519785  110500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0114 10:33:15.534253  110500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:33:15.546554  110500 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:33:15.549454  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:33:15.558528  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:33:15.558803  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:33:15.558758  110500 start.go:286] JoinCluster: &{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:33:15.558854  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0114 10:33:15.558894  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:33:15.583347  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:33:15.717142  110500 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 
	I0114 10:33:15.717199  110500 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:15.717241  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:33:15.717479  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0114 10:33:15.717511  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:33:15.742483  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:33:15.884475  110500 command_runner.go:130] > node/multinode-102822-m02 cordoned
	I0114 10:33:17.906182  110500 command_runner.go:130] > pod/busybox-65db55d5d6-jth2v deleted
	I0114 10:33:17.906203  110500 command_runner.go:130] > node/multinode-102822-m02 drained
	I0114 10:33:17.939112  110500 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0114 10:33:17.939144  110500 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bwgvn, kube-system/kube-proxy-4d5n6
	I0114 10:33:17.939174  110500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.221667006s)
	I0114 10:33:17.939190  110500 node.go:109] successfully drained node "m02"
	I0114 10:33:17.939610  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:33:17.939958  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:33:17.940363  110500 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0114 10:33:17.940424  110500 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:33:17.940434  110500 round_trippers.go:469] Request Headers:
	I0114 10:33:17.940446  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:33:17.940458  110500 round_trippers.go:473]     Content-Type: application/json
	I0114 10:33:17.940467  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:33:17.944032  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:33:17.944056  110500 round_trippers.go:577] Response Headers:
	I0114 10:33:17.944066  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:33:17 GMT
	I0114 10:33:17.944074  110500 round_trippers.go:580]     Audit-Id: 1447952b-a9b6-4bad-af8b-4518cb0f651f
	I0114 10:33:17.944082  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:33:17.944092  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:33:17.944099  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:33:17.944106  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:33:17.944118  110500 round_trippers.go:580]     Content-Length: 171
	I0114 10:33:17.944147  110500 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-102822-m02","kind":"nodes","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4"}}
	I0114 10:33:17.944184  110500 node.go:125] successfully deleted node "m02"
	I0114 10:33:17.944197  110500 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:17.944219  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:17.944239  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:17.981302  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:18.010540  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:18.010570  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:18.010578  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:18.010587  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:18.010596  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:18.010604  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:18.010620  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:18.010632  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:18.010643  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:18.010656  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:18.010668  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:18.010680  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:18.089584  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:18.089615  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 10:33:18.108662  110500 command_runner.go:130] ! W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:18.108687  110500 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 10:33:18.108704  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:18.108710  110500 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 10:33:18.108718  110500 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 10:33:18.108729  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:18.108739  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 10:33:18.108785  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
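	Note the contradiction driving this retry loop: the Node object for multinode-102822-m02 was deleted via the API moments earlier (the DELETE above returned 200), yet kubeadm join still finds a Node with that name and status "Ready". The kubelet left running in the m02 container most likely re-registers the node between the delete and the join's kubelet-start phase, so each retry hits the same error. One way to observe the race is to check for the Node object right before joining; a client-go sketch (kubeconfig path and node name copied from this log, everything else illustrative):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeExists reports whether the API server still has a Node object,
// which is exactly what makes the join above fail.
func nodeExists(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	if _, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
		if apierrors.IsNotFound(err) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

func main() {
	exists, err := nodeExists("/var/lib/minikube/kubeconfig", "multinode-102822-m02")
	fmt.Println("node present:", exists, "err:", err)
}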
	I0114 10:33:18.108798  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:18.108809  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:18.140909  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:18.141227  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:18.141249  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:18.141257  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:18.233452  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:18.688329  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:18.688363  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:18.688374  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:18.689316  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:18.689334  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:18.689341  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:18.689347  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:18.689354  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:18.689370  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:18.689382  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:18.691216  110500 command_runner.go:130] ! W0114 10:33:18.140748     931 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:18.691240  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:18.691258  110500 retry.go:31] will retry after 11.645600532s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.339744  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:30.339825  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:30.371164  110500 command_runner.go:130] ! W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:30.391839  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:30.474705  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:30.474732  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.476832  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:30.476856  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:30.476865  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:30.476871  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:30.476879  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:30.476888  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:30.476897  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:30.476907  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:30.476923  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:30.476933  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:30.476947  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:30.476959  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:30.476971  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:30.476983  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:30.476997  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 10:33:30.477078  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.477095  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:30.477113  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:30.507252  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:30.507323  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:30.507351  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:30.507364  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:30.510729  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:30.527843  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:30.527883  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:30.527895  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:30.527906  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:30.527919  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:30.527933  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:30.527944  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:30.527952  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:30.527963  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:30.527971  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:30.529841  110500 command_runner.go:130] ! W0114 10:33:30.506899    1326 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:30.529869  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:30.529889  110500 retry.go:31] will retry after 14.065712808s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.596274  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:44.596340  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:44.627227  110500 command_runner.go:130] ! W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:44.647728  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:44.732356  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:44.732389  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.734274  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:44.734294  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:44.734301  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:44.734305  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:44.734310  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:44.734316  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:44.734323  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:44.734328  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:44.734333  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:44.734338  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:44.734347  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:44.734352  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:44.734357  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:44.734362  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:44.734372  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 10:33:44.734418  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.734433  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:44.734444  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:44.764574  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:44.764594  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:44.764606  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:44.764759  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:44.768331  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:44.785135  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:44.785167  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:44.785178  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:44.785189  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:44.785200  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:44.785211  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:44.785217  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:44.785224  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:44.785239  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:44.785247  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:44.786933  110500 command_runner.go:130] ! W0114 10:33:44.764260    1389 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:44.786961  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:44.786980  110500 retry.go:31] will retry after 20.804343684s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:34:05.591739  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:34:05.591806  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:34:05.623871  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:34:05.645267  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:34:05.645298  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:34:05.645308  110500 command_runner.go:130] > OS: Linux
	I0114 10:34:05.645316  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:34:05.645324  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:34:05.645332  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:34:05.645344  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:34:05.645357  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:34:05.645370  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:34:05.645390  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:34:05.645402  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:34:05.645416  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:34:05.714434  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:34:05.714463  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 10:34:05.738856  110500 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:34:05.738889  110500 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:34:05.738898  110500 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 10:34:05.816843  110500 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0114 10:34:11.333257  110500 command_runner.go:130] > This node has joined the cluster:
	I0114 10:34:11.333282  110500 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0114 10:34:11.333289  110500 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0114 10:34:11.333295  110500 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0114 10:34:11.335498  110500 command_runner.go:130] ! W0114 10:34:05.623434    1411 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:34:11.335523  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:34:11.335543  110500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": (5.743723595s)
	I0114 10:34:11.335568  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0114 10:34:11.487045  110500 start.go:288] JoinCluster complete in 55.928280874s
	I0114 10:34:11.487079  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:34:11.487087  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:34:11.487145  110500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:34:11.490436  110500 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 10:34:11.490463  110500 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 10:34:11.490478  110500 command_runner.go:130] > Device: 34h/52d	Inode: 565966      Links: 1
	I0114 10:34:11.490489  110500 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:34:11.490502  110500 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:34:11.490512  110500 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:34:11.490523  110500 command_runner.go:130] > Change: 2023-01-14 10:06:59.488187836 +0000
	I0114 10:34:11.490531  110500 command_runner.go:130] >  Birth: -
	I0114 10:34:11.490577  110500 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:34:11.490589  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:34:11.503251  110500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:34:11.654512  110500 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:34:11.656123  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:34:11.657892  110500 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 10:34:11.665488  110500 command_runner.go:130] > daemonset.apps/kindnet configured
	I0114 10:34:11.669420  110500 start.go:212] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:34:11.671634  110500 out.go:177] * Verifying Kubernetes components...
	I0114 10:34:11.673414  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:34:11.682991  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:34:11.683197  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:34:11.683407  110500 node_ready.go:35] waiting up to 6m0s for node "multinode-102822-m02" to be "Ready" ...
	I0114 10:34:11.683462  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:34:11.683469  110500 round_trippers.go:469] Request Headers:
	I0114 10:34:11.683476  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:34:11.683486  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:34:11.685497  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:34:11.685521  110500 round_trippers.go:577] Response Headers:
	I0114 10:34:11.685532  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:34:11.685540  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:34:11.685548  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:34:11.685558  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:34:11.685571  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:34:11 GMT
	I0114 10:34:11.685584  110500 round_trippers.go:580]     Audit-Id: f79a2115-26cd-46bc-8cab-079a2a0ca5bf
	I0114 10:34:11.685686  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"f1608f12-9a61-41d8-b38b-2fa2b878a3bb","resourceVersion":"915","creationTimestamp":"2023-01-14T10:33:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:33:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update"," [truncated 4761 chars]
	I0114 10:34:11.686002  110500 node_ready.go:53] node "multinode-102822-m02" has status "Ready":"Unknown"
	I0114 10:34:11.686020  110500 node_ready.go:38] duration metric: took 2.599177ms waiting for node "multinode-102822-m02" to be "Ready" ...
	I0114 10:34:11.687814  110500 out.go:177] 
	W0114 10:34:11.689189  110500 out.go:239] X Exiting due to GUEST_START: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: node "multinode-102822-m02" has status "Ready":"Unknown"
	W0114 10:34:11.689225  110500 out.go:239] * 
	W0114 10:34:11.690030  110500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:34:11.691589  110500 out.go:177] 
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-102822" : exit status 80
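Note on the stderr capture above: every `kubeadm join` attempt exits 1 with `a Node with name "multinode-102822-m02" and status "Ready" already exists`, so minikube resets the worker (`kubeadm reset --force`, start.go:312-316) and retries with a growing backoff (≈14s, then ≈20.8s). The third attempt at 10:34:05 goes through, most likely because the node-lifecycle controller had by then flipped the stale Node's Ready condition to "Unknown", which kubeadm's pre-check no longer rejects; the rejoined node then never reaches Ready within the 6m wait, so start exits 80 (GUEST_START). A minimal Go sketch of that retry-then-reset loop follows; the command strings are illustrative placeholders, not minikube's actual internals:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// joinWorker mirrors the loop visible at start.go:307-316 in the log:
// try `kubeadm join`; on failure, `kubeadm reset --force` and retry
// after a growing delay. Arguments here are hypothetical placeholders.
func joinWorker(joinArgs, resetArgs []string, retries int) error {
	delay := 14 * time.Second // first backoff observed above
	var lastErr error
	for i := 0; i <= retries; i++ {
		out, err := exec.Command("sudo", joinArgs...).CombinedOutput()
		if err == nil {
			return nil // join succeeded
		}
		lastErr = fmt.Errorf("kubeadm join: %w\n%s", err, out)
		// Clear kubelet state so the next attempt starts clean,
		// as `kubeadm reset --force` does in the log.
		_ = exec.Command("sudo", resetArgs...).Run()
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the backoff between attempts
	}
	return lastErr
}

func main() {
	err := joinWorker(
		[]string{"kubeadm", "join", "control-plane.minikube.internal:8443", "--ignore-preflight-errors=all"},
		[]string{"kubeadm", "reset", "--force"},
		2,
	)
	fmt.Println(err)
}

As the kubeadm error text itself suggests, deleting the stale object first (kubectl delete node multinode-102822-m02) would have let the first join attempt through.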
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-102822
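The Ready condition that ultimately failed the wait can be read back with a one-off query; this mirrors the GET the wait loop issues against /api/v1/nodes/multinode-102822-m02. A sketch, assuming kubectl is pointed at this profile's kubeconfig:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// One-shot version of the node_ready.go poll above: read the worker
// node's Ready condition through kubectl instead of the REST client.
func main() {
	out, err := exec.Command("kubectl", "get", "node", "multinode-102822-m02",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl:", err)
		return
	}
	// The run above saw "Unknown" here, hence exit status 80.
	fmt.Println("Ready:", strings.TrimSpace(string(out)))
}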
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-102822
helpers_test.go:235: (dbg) docker inspect multinode-102822:
-- stdout --
	[
	    {
	        "Id": "6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd",
	        "Created": "2023-01-14T10:28:29.623082209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 110804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:31:56.902992649Z",
	            "FinishedAt": "2023-01-14T10:31:34.915062825Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/hosts",
	        "LogPath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd-json.log",
	        "Name": "/multinode-102822",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-102822:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-102822",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3-init/diff:/var/lib/docker/overlay2/cfa67474dfffbd23c875ed1363951467d9d88e2b76451e5643f2505208741f3b/diff:/var/lib/docker/overlay2/073ec06077c9f139927a68d24e4f683141baf9acf954f7927a62d439b8e24069/diff:/var/lib/docker/overlay2/100e369464b40a65b67d4855b5a41f41832f93605f574ff35657d9b2d0ee5b4f/diff:/var/lib/docker/overlay2/e2f9a50fd4c46aeeaf52dd5d2c45c5548e516eaa4949cae4e8f8be3dda02e560/diff:/var/lib/docker/overlay2/6d3b34d6067ad9d3ff171a32fea0902c6748df9aeb5a46e12971cdc70934e200/diff:/var/lib/docker/overlay2/44f244a49f3260ebade676a0e6177935228bcd4504617609ee4343aa284e724c/diff:/var/lib/docker/overlay2/1cba83561484d9f781c67421553c95b75266d2217256379d5787e510ac28483f/diff:/var/lib/docker/overlay2/9ec5ab0f595877fa3d60d26e7aa243026d8b45fea861a3e12c469d81ab1ffe6c/diff:/var/lib/docker/overlay2/30d22319caaa0760daf22d54c95076cad3b970afb61aa7c018ac37b623117613/diff:/var/lib/docker/overlay2/1f5756
3ce3807317a405416fbe25b96e16e33708f4f97020c4f82e1e2b4da5ed/diff:/var/lib/docker/overlay2/604bdff9bf4c8bdcc970ae4f7e8734a5aa27c04fb328f61dea00c3740f12daba/diff:/var/lib/docker/overlay2/03f7c27604538c82d3d43dfde85aa33dc8f2658b93b51f500b27edd3b1aaed98/diff:/var/lib/docker/overlay2/f9ceccc940eb08b69d102c744810d1aff5795c7e9a58c20d43ca6857fa21b8ea/diff:/var/lib/docker/overlay2/576f7412e6f61feeea74cdfbae850007513e8aa407ce5e45f903c70ce2f89fe5/diff:/var/lib/docker/overlay2/958517a359371ca3276a50323466f96ec3d5d7687cb2f26c287a9a343fcbcd20/diff:/var/lib/docker/overlay2/c09247966342dd284c940bcd881b6187476a63e53055e9f378aaa25ceaa86263/diff:/var/lib/docker/overlay2/85bda0ea7bf5a8c05a6eb175b445c71a710e3e392fc1b70957e3902cec94586f/diff:/var/lib/docker/overlay2/7cde8ffb6999e9d99ff44b83daaf1a781dd6546a7a96eda5b901e88658c78f74/diff:/var/lib/docker/overlay2/92d42128dacdf015e3ce466b8e365093147199e2fffcda0192857efed322565f/diff:/var/lib/docker/overlay2/0f2dff826ddc5a3be056ecb8791656438fd8d9122e0bfa4bf808ff640ddd0366/diff:/var/lib/d
ocker/overlay2/44a9089aeee67c883a076dc1940e80698f487176c3d197f321518402ce7a4467/diff:/var/lib/docker/overlay2/6068fe71ba149c31fa6947b978b0755f11f334f9d40e14b5c9946cf9a103ca68/diff:/var/lib/docker/overlay2/adb5ed5619948c4b7e4d83048cd96cc3d6ded2ae453b67da2e120f4ada989e97/diff:/var/lib/docker/overlay2/d633ebbd9eed2900d2e31406be983b7d21e70ac3c07593de38c5cfb0628275ae/diff:/var/lib/docker/overlay2/87f4a27d0733b1bdf23169c5079f854d115bfd926c76a346d28259b8f2abc0f9/diff:/var/lib/docker/overlay2/4b514ac9d0ce1d6bff4ec77673304888b5a45fca7d9a52d872475d70a4bad242/diff:/var/lib/docker/overlay2/76f964a17c8531bd97500c5bf3aa0b003b317ad1c055c0d1c475d41666734b75/diff:/var/lib/docker/overlay2/0a0f3b972da362a17d673ffdcd0d42b3663faeed5e799b2b38868036d5cd1533/diff:/var/lib/docker/overlay2/a07c41d799979e1f64f7bf3d0bcd9a98b724ebea06eafa1a01b83c71c76f9d3c/diff:/var/lib/docker/overlay2/0be1fd774bf851dd17c525a17f8a015aa3c0f1f71b29033666a62cd2be3a495f/diff:/var/lib/docker/overlay2/62db7acc5b1cb93b6e26eb5c826b67cebb252c079fd5a060ba843227c91
c864f/diff:/var/lib/docker/overlay2/076dea682ce5421a9c145f8038044bf438f06c3635406efdf60ef350f109389f/diff:/var/lib/docker/overlay2/143de4d69dc548610d4e281cfb14bf70d7ed81172bee212fc15755591dea37b4/diff:/var/lib/docker/overlay2/89ecf87d7b563ffa220047c3bb13c7ea55ebb215cbd3d2731d795ce559d5b9b4/diff:/var/lib/docker/overlay2/e9f8c0a087f0832425535d00100392d8b267181825a52ae7291fb7fe7ab62614/diff:/var/lib/docker/overlay2/66fb715c26be36afdfe15f9e2562f7320c04421f7bff30da6424afc0395d1f19/diff:/var/lib/docker/overlay2/24d5a6709af6741b4216757263798c2fd2ffbe83a81f68619cd00e2107b4ff3d/diff:/var/lib/docker/overlay2/865a5915817b4d31f71061a418fcc1c284ee124c9b3a275c3676cb2b3fba32dd/diff:/var/lib/docker/overlay2/b33545ce05c040395c79c17ae2fc9b23755b589f9f6e2f94121abe1cc5c2869c/diff:/var/lib/docker/overlay2/22f66646b2dde6f03ac24f5affc8a43db7aaae6b2e9677ae4cf9e607238761e4/diff:/var/lib/docker/overlay2/789c281f8e044ab343c9800dc7431b8fbaf616ecd3419979e8a3dfbb605f8efe/diff:/var/lib/docker/overlay2/6dd50d303cdaa1e2fa047ed92b16580d8b0c2c
77552b9a13e0c356884add5310/diff:/var/lib/docker/overlay2/b1d8d5816bce1b48db468539e1bc343a7c87dee89fb1783174081611a7e0b2ee/diff:/var/lib/docker/overlay2/529b543dd76f6ad1b33944f7c0767adca9befb5d162c4c1bf13756f3c0048fb4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-102822",
	                "Source": "/var/lib/docker/volumes/multinode-102822/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-102822",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-102822",
	                "name.minikube.sigs.k8s.io": "multinode-102822",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c365116ec3d8c3847cc5a6224f1bf95c642d8cee266ac42fa0fe488c76ef78f7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32867"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32866"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32863"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32864"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c365116ec3d8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-102822": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f4311dd36b9",
	                        "multinode-102822"
	                    ],
	                    "NetworkID": "8d666bf786b0ec1697724ea7b42f362db065de718263087da543d41833d5baef",
	                    "EndpointID": "2cde720be508be7509e0a998f8cb46b885515ba56d7d9484e72335be284c5878",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
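Most of the inspect dump above is boilerplate; the load-bearing facts are that the control-plane container is running (PID 110804, restarted 10:31:56), publishes 22/2376/5000/8443/32443 on localhost-only ephemeral ports, and holds the static address 192.168.58.2 on the multinode-102822 network. A single field can be pulled with an inspect format template instead of scanning the JSON; a sketch for the published apiserver port:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Extract one field from `docker inspect` via a Go format template;
// the container name is the profile under test above.
func main() {
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"multinode-102822").Output()
	if err != nil {
		fmt.Println("docker inspect:", err)
		return
	}
	fmt.Println("apiserver published at 127.0.0.1:" + strings.TrimSpace(string(out)))
}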
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-102822 -n multinode-102822
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-102822 logs -n 25: (1.533297213s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822:/home/docker/cp-test_multinode-102822-m02_multinode-102822.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822 sudo cat                                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m02_multinode-102822.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03:/home/docker/cp-test_multinode-102822-m02_multinode-102822-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822-m03 sudo cat                                   | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m02_multinode-102822-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp testdata/cp-test.txt                                                | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822:/home/docker/cp-test_multinode-102822-m03_multinode-102822.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822 sudo cat                                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m03_multinode-102822.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02:/home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822-m02 sudo cat                                   | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-102822 node stop m03                                                          | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	| node    | multinode-102822 node start                                                             | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:31 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC |                     |
	| stop    | -p multinode-102822                                                                     | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC | 14 Jan 23 10:31 UTC |
	| start   | -p multinode-102822                                                                     | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:31:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:31:56.244080  110500 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:31:56.244253  110500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:31:56.244261  110500 out.go:309] Setting ErrFile to fd 2...
	I0114 10:31:56.244266  110500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:31:56.244366  110500 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:31:56.244882  110500 out.go:303] Setting JSON to false
	I0114 10:31:56.246188  110500 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4464,"bootTime":1673687853,"procs":640,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:31:56.246254  110500 start.go:135] virtualization: kvm guest
	I0114 10:31:56.248786  110500 out.go:177] * [multinode-102822] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:31:56.250375  110500 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:31:56.250301  110500 notify.go:220] Checking for updates...
	I0114 10:31:56.253580  110500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:31:56.255205  110500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:31:56.256807  110500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:31:56.258293  110500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:31:56.260196  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:31:56.260244  110500 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:31:56.288513  110500 docker.go:138] docker version: linux-20.10.22
	I0114 10:31:56.288613  110500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:31:56.380775  110500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:31:56.306666417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:31:56.380877  110500 docker.go:255] overlay module found
	I0114 10:31:56.383058  110500 out.go:177] * Using the docker driver based on existing profile
	I0114 10:31:56.384332  110500 start.go:294] selected driver: docker
	I0114 10:31:56.384350  110500 start.go:838] validating driver "docker" against &{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:31:56.384462  110500 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:31:56.384525  110500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:31:56.478549  110500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:31:56.403841818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:31:56.479153  110500 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:31:56.479180  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:31:56.479187  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:31:56.479205  110500 start_flags.go:319] config:
	{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
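
The "3 nodes found, recommending kindnet" line above is minikube's CNI selection for multi-node clusters: with more than one node a real CNI is required, and kindnet is the recommended default. A rough, hypothetical Go sketch of that choice (not minikube's actual code):

    package main

    import "fmt"

    // recommendCNI condenses the decision logged above: an explicitly
    // requested CNI wins; otherwise multi-node clusters get kindnet.
    func recommendCNI(nodeCount int, requested string) string {
    	if requested != "" {
    		return requested
    	}
    	if nodeCount > 1 {
    		return "kindnet"
    	}
    	return "" // single node: fall through to the runtime's default
    }

    func main() {
    	fmt.Println(recommendCNI(3, "")) // prints "kindnet", as in the log
    }
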
	I0114 10:31:56.482752  110500 out.go:177] * Starting control plane node multinode-102822 in cluster multinode-102822
	I0114 10:31:56.484264  110500 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:31:56.485726  110500 out.go:177] * Pulling base image ...
	I0114 10:31:56.487160  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:31:56.487205  110500 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:31:56.487226  110500 cache.go:57] Caching tarball of preloaded images
	I0114 10:31:56.487203  110500 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:31:56.487522  110500 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:31:56.487542  110500 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:31:56.487744  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:31:56.509755  110500 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:31:56.509787  110500 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:31:56.509802  110500 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:31:56.509837  110500 start.go:364] acquiring machines lock for multinode-102822: {Name:mkd70e1f2f35b7e6f7c31ed25602b988985e4fa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:31:56.509932  110500 start.go:368] acquired machines lock for "multinode-102822" in 68.904µs
	I0114 10:31:56.509951  110500 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:31:56.509955  110500 fix.go:55] fixHost starting: 
	I0114 10:31:56.510146  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:31:56.531979  110500 fix.go:103] recreateIfNeeded on multinode-102822: state=Stopped err=<nil>
	W0114 10:31:56.532013  110500 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:31:56.535180  110500 out.go:177] * Restarting existing docker container for "multinode-102822" ...
	I0114 10:31:56.536670  110500 cli_runner.go:164] Run: docker start multinode-102822
	I0114 10:31:56.910511  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:31:56.935016  110500 kic.go:426] container "multinode-102822" state is running.
	I0114 10:31:56.935341  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:31:56.958657  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:31:56.958868  110500 machine.go:88] provisioning docker machine ...
	I0114 10:31:56.958889  110500 ubuntu.go:169] provisioning hostname "multinode-102822"
	I0114 10:31:56.958926  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:31:56.981260  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:31:56.981492  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0114 10:31:56.981520  110500 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102822 && echo "multinode-102822" | sudo tee /etc/hostname
	I0114 10:31:56.982146  110500 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38962->127.0.0.1:32867: read: connection reset by peer
	I0114 10:32:00.107919  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102822
	
	I0114 10:32:00.107984  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.131658  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:00.131837  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0114 10:32:00.131856  110500 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102822/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:32:00.247376  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: 
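
The SSH snippet above is the provisioner's idempotent hostname fix-up: after setting the hostname it ensures /etc/hosts maps 127.0.1.1 to the machine name, rewriting an existing 127.0.1.1 entry instead of appending a duplicate. A minimal Go sketch that assembles the same command string (the helper name is invented for illustration; the shell body mirrors the log):

    package main

    import "fmt"

    // setHostnameCmd builds the idempotent /etc/hosts update that is run
    // over SSH after the hostname itself has been set.
    func setHostnameCmd(name string) string {
    	return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, name)
    }

    func main() {
    	fmt.Println(setHostnameCmd("multinode-102822"))
    }
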
	I0114 10:32:00.247412  110500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:32:00.247433  110500 ubuntu.go:177] setting up certificates
	I0114 10:32:00.247441  110500 provision.go:83] configureAuth start
	I0114 10:32:00.247481  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:32:00.270071  110500 provision.go:138] copyHostCerts
	I0114 10:32:00.270112  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:00.270162  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:32:00.270173  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:00.270248  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:32:00.270337  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:00.270358  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:32:00.270365  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:00.270400  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:32:00.270455  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:00.270478  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:32:00.270487  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:00.270524  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:32:00.270583  110500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.multinode-102822 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-102822]
	I0114 10:32:00.494150  110500 provision.go:172] copyRemoteCerts
	I0114 10:32:00.494232  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:32:00.494276  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.517022  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.602641  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 10:32:00.602710  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:32:00.619533  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 10:32:00.619601  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0114 10:32:00.635920  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 10:32:00.635984  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 10:32:00.652526  110500 provision.go:86] duration metric: configureAuth took 405.072699ms
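
configureAuth copies the CA, client cert, and client key into the profile's .minikube directory and generates a server certificate whose SANs cover the node IP, localhost, and the machine name (the san=[...] list above). A self-contained standard-library sketch of SAN certificate generation; it self-signs for brevity, whereas minikube signs with its CA:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-102822"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
    		DNSNames:     []string{"localhost", "minikube", "multinode-102822"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here (template doubles as parent) purely to keep the
    	// sketch short; the real flow signs with the CA key pair.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
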
	I0114 10:32:00.652560  110500 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:32:00.652742  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:00.652754  110500 machine.go:91] provisioned docker machine in 3.693874899s
	I0114 10:32:00.652761  110500 start.go:300] post-start starting for "multinode-102822" (driver="docker")
	I0114 10:32:00.652767  110500 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:32:00.652803  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:32:00.652841  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.676636  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.758928  110500 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:32:00.761499  110500 command_runner.go:130] > NAME="Ubuntu"
	I0114 10:32:00.761517  110500 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 10:32:00.761524  110500 command_runner.go:130] > ID=ubuntu
	I0114 10:32:00.761532  110500 command_runner.go:130] > ID_LIKE=debian
	I0114 10:32:00.761540  110500 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 10:32:00.761548  110500 command_runner.go:130] > VERSION_ID="20.04"
	I0114 10:32:00.761559  110500 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 10:32:00.761567  110500 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 10:32:00.761572  110500 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 10:32:00.761584  110500 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 10:32:00.761591  110500 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 10:32:00.761595  110500 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 10:32:00.761748  110500 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:32:00.761772  110500 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:32:00.761786  110500 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:32:00.761796  110500 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:32:00.761810  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:32:00.761869  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:32:00.761948  110500 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:32:00.761962  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /etc/ssl/certs/103062.pem
	I0114 10:32:00.762051  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:32:00.768638  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:32:00.785666  110500 start.go:303] post-start completed in 132.893086ms
	I0114 10:32:00.785739  110500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:32:00.785780  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.808883  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.892008  110500 command_runner.go:130] > 18%! (MISSING)
	I0114 10:32:00.892093  110500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:32:00.895774  110500 command_runner.go:130] > 239G
	I0114 10:32:00.895937  110500 fix.go:57] fixHost completed within 4.385975679s
	I0114 10:32:00.895960  110500 start.go:83] releasing machines lock for "multinode-102822", held for 4.386015126s
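
The acquiring/releasing "machines lock" pair brackets all host mutation so that concurrent minikube invocations cannot operate on the same machine at once; the lock spec printed earlier shows a 500ms retry delay and a 10m acquire timeout. An illustrative acquire loop under those assumptions (the lock type here is invented, not minikube's file-based implementation):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // namedLock is an illustrative stand-in; Delay and Timeout mirror the
    // fields printed in the log's lock spec.
    type namedLock struct {
    	mu      sync.Mutex
    	Delay   time.Duration
    	Timeout time.Duration
    }

    // Acquire polls until the lock is free or the timeout elapses.
    func (l *namedLock) Acquire() error {
    	deadline := time.Now().Add(l.Timeout)
    	for {
    		if l.mu.TryLock() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s", l.Timeout)
    		}
    		time.Sleep(l.Delay)
    	}
    }

    func main() {
    	l := &namedLock{Delay: 500 * time.Millisecond, Timeout: 10 * time.Minute}
    	if err := l.Acquire(); err == nil {
    		defer l.mu.Unlock()
    		fmt.Println("lock held; safe to mutate the machine")
    	}
    }
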
	I0114 10:32:00.896044  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:32:00.919896  110500 ssh_runner.go:195] Run: cat /version.json
	I0114 10:32:00.919947  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.919973  110500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:32:00.920028  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.942987  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.946487  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:01.054033  110500 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 10:32:01.054097  110500 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0114 10:32:01.054188  110500 ssh_runner.go:195] Run: systemctl --version
	I0114 10:32:01.057819  110500 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0114 10:32:01.057844  110500 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0114 10:32:01.058053  110500 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:32:01.068862  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:32:01.077997  110500 docker.go:189] disabling docker service ...
	I0114 10:32:01.078119  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:32:01.087867  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:32:01.096584  110500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:32:01.179660  110500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:32:01.257778  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:32:01.266818  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:32:01.278503  110500 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:32:01.278530  110500 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:32:01.279238  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:32:01.286923  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:32:01.294475  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:32:01.302050  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
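
The sed invocations above rewrite /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.8, restrict_oom_score_adj and SystemdCgroup are forced to false (this kubelet uses the cgroupfs driver, per the docker info output earlier), and the CNI conf_dir is pointed at /etc/cni/net.mk. The same rewrites expressed in Go, purely as an illustration (the regexes mirror the log's sed expressions):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// A trimmed stand-in for /etc/containerd/config.toml.
    	conf := `sandbox_image = "registry.k8s.io/pause:3.6"
    restrict_oom_score_adj = true
    SystemdCgroup = true
    conf_dir = "/etc/cni/net.d"`

    	// Each pair mirrors one sed expression from the log.
    	edits := []struct{ re, to string }{
    		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`},
    		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
    		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
    		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
    	}
    	for _, e := range edits {
    		conf = regexp.MustCompile(e.re).ReplaceAllString(conf, e.to)
    	}
    	fmt.Println(conf)
    }
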
	I0114 10:32:01.309511  110500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:32:01.314863  110500 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0114 10:32:01.315392  110500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:32:01.321309  110500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:32:01.393049  110500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:32:01.455546  110500 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:32:01.455627  110500 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:32:01.458967  110500 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0114 10:32:01.458992  110500 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 10:32:01.458999  110500 command_runner.go:130] > Device: 3fh/63d	Inode: 109         Links: 1
	I0114 10:32:01.459006  110500 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:32:01.459012  110500 command_runner.go:130] > Access: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459016  110500 command_runner.go:130] > Modify: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459023  110500 command_runner.go:130] > Change: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459028  110500 command_runner.go:130] >  Birth: -
	I0114 10:32:01.459049  110500 start.go:472] Will wait 60s for crictl version
	I0114 10:32:01.459115  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:32:01.462116  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:32:01.462198  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:32:01.488685  110500 command_runner.go:130] ! time="2023-01-14T10:32:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:32:01.488775  110500 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:32:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:32:12.536033  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:32:12.557499  110500 command_runner.go:130] > Version:  0.1.0
	I0114 10:32:12.557525  110500 command_runner.go:130] > RuntimeName:  containerd
	I0114 10:32:12.557533  110500 command_runner.go:130] > RuntimeVersion:  1.6.10
	I0114 10:32:12.557540  110500 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0114 10:32:12.559041  110500 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:32:12.559089  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:32:12.580521  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:32:12.581939  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:32:12.602970  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:32:12.607003  110500 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:32:12.608552  110500 cli_runner.go:164] Run: docker network inspect multinode-102822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:32:12.630384  110500 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0114 10:32:12.633652  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:32:12.642818  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:32:12.642867  110500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:32:12.665261  110500 command_runner.go:130] > {
	I0114 10:32:12.665286  110500 command_runner.go:130] >   "images": [
	I0114 10:32:12.665292  110500 command_runner.go:130] >     {
	I0114 10:32:12.665303  110500 command_runner.go:130] >       "id": "sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f",
	I0114 10:32:12.665311  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665320  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20221004-44d545d1"
	I0114 10:32:12.665326  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665335  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665344  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"
	I0114 10:32:12.665354  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665359  110500 command_runner.go:130] >       "size": "25830582",
	I0114 10:32:12.665363  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665369  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665374  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665383  110500 command_runner.go:130] >     },
	I0114 10:32:12.665390  110500 command_runner.go:130] >     {
	I0114 10:32:12.665397  110500 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0114 10:32:12.665403  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665409  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0114 10:32:12.665415  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665419  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665426  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0114 10:32:12.665432  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665437  110500 command_runner.go:130] >       "size": "725911",
	I0114 10:32:12.665443  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665448  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665454  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665458  110500 command_runner.go:130] >     },
	I0114 10:32:12.665464  110500 command_runner.go:130] >     {
	I0114 10:32:12.665470  110500 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0114 10:32:12.665477  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665482  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:32:12.665488  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665496  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665509  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0114 10:32:12.665515  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665519  110500 command_runner.go:130] >       "size": "9058936",
	I0114 10:32:12.665526  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665530  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665537  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665540  110500 command_runner.go:130] >     },
	I0114 10:32:12.665546  110500 command_runner.go:130] >     {
	I0114 10:32:12.665553  110500 command_runner.go:130] >       "id": "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
	I0114 10:32:12.665561  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665570  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:32:12.665576  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665581  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665590  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"
	I0114 10:32:12.665596  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665600  110500 command_runner.go:130] >       "size": "14837849",
	I0114 10:32:12.665607  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665611  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665617  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665621  110500 command_runner.go:130] >     },
	I0114 10:32:12.665627  110500 command_runner.go:130] >     {
	I0114 10:32:12.665634  110500 command_runner.go:130] >       "id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
	I0114 10:32:12.665650  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665658  110500 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.4-0"
	I0114 10:32:12.665662  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665668  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665675  110500 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"
	I0114 10:32:12.665681  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665686  110500 command_runner.go:130] >       "size": "102157811",
	I0114 10:32:12.665692  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665696  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665702  110500 command_runner.go:130] >       },
	I0114 10:32:12.665706  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665716  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665724  110500 command_runner.go:130] >     },
	I0114 10:32:12.665731  110500 command_runner.go:130] >     {
	I0114 10:32:12.665737  110500 command_runner.go:130] >       "id": "sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0",
	I0114 10:32:12.665744  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665750  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:32:12.665754  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665760  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665770  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"
	I0114 10:32:12.665776  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665780  110500 command_runner.go:130] >       "size": "34238163",
	I0114 10:32:12.665786  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665790  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665796  110500 command_runner.go:130] >       },
	I0114 10:32:12.665801  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665807  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665810  110500 command_runner.go:130] >     },
	I0114 10:32:12.665816  110500 command_runner.go:130] >     {
	I0114 10:32:12.665823  110500 command_runner.go:130] >       "id": "sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a",
	I0114 10:32:12.665830  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665835  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:32:12.665842  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665846  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665856  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"
	I0114 10:32:12.665862  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665867  110500 command_runner.go:130] >       "size": "31261869",
	I0114 10:32:12.665873  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665877  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665887  110500 command_runner.go:130] >       },
	I0114 10:32:12.665891  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665899  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665906  110500 command_runner.go:130] >     },
	I0114 10:32:12.665910  110500 command_runner.go:130] >     {
	I0114 10:32:12.665916  110500 command_runner.go:130] >       "id": "sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041",
	I0114 10:32:12.665920  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665927  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:32:12.665931  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665937  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665945  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"
	I0114 10:32:12.665951  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665956  110500 command_runner.go:130] >       "size": "20265805",
	I0114 10:32:12.665962  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665966  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665973  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665978  110500 command_runner.go:130] >     },
	I0114 10:32:12.665984  110500 command_runner.go:130] >     {
	I0114 10:32:12.665990  110500 command_runner.go:130] >       "id": "sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912",
	I0114 10:32:12.665997  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.666002  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:32:12.666008  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666012  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.666022  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"
	I0114 10:32:12.666028  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666032  110500 command_runner.go:130] >       "size": "15798744",
	I0114 10:32:12.666038  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.666042  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.666048  110500 command_runner.go:130] >       },
	I0114 10:32:12.666052  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.666059  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.666063  110500 command_runner.go:130] >     },
	I0114 10:32:12.666069  110500 command_runner.go:130] >     {
	I0114 10:32:12.666075  110500 command_runner.go:130] >       "id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
	I0114 10:32:12.666083  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.666088  110500 command_runner.go:130] >         "registry.k8s.io/pause:3.8"
	I0114 10:32:12.666094  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666099  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.666108  110500 command_runner.go:130] >         "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	I0114 10:32:12.666114  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666127  110500 command_runner.go:130] >       "size": "311286",
	I0114 10:32:12.666133  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.666138  110500 command_runner.go:130] >         "value": "65535"
	I0114 10:32:12.666143  110500 command_runner.go:130] >       },
	I0114 10:32:12.666148  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.666154  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.666158  110500 command_runner.go:130] >     }
	I0114 10:32:12.666163  110500 command_runner.go:130] >   ]
	I0114 10:32:12.666166  110500 command_runner.go:130] > }
	I0114 10:32:12.666314  110500 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:32:12.666326  110500 containerd.go:467] Images already preloaded, skipping extraction
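
The verdict "all images are preloaded" comes from parsing the crictl image list and checking that every image required for Kubernetes v1.25.3 on containerd is already present, which lets minikube skip extracting the preload tarball. A hypothetical sketch of that check against the JSON shape shown above (the helper name is invented):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // imageList matches the shape of `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // allPreloaded reports whether every wanted tag appears in the list.
    func allPreloaded(raw []byte, want []string) (bool, error) {
    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, tag := range want {
    		if !have[tag] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.8"]}]}`)
    	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.8"})
    	fmt.Println(ok, err) // true <nil>
    }
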
	I0114 10:32:12.666364  110500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:32:12.687374  110500 command_runner.go:130] > {
	I0114 10:32:12.687399  110500 command_runner.go:130] >   "images": [
	I0114 10:32:12.687405  110500 command_runner.go:130] >     {
	I0114 10:32:12.687415  110500 command_runner.go:130] >       "id": "sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f",
	I0114 10:32:12.687421  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687428  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20221004-44d545d1"
	I0114 10:32:12.687433  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687440  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687460  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"
	I0114 10:32:12.687475  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687483  110500 command_runner.go:130] >       "size": "25830582",
	I0114 10:32:12.687490  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687496  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687503  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687509  110500 command_runner.go:130] >     },
	I0114 10:32:12.687514  110500 command_runner.go:130] >     {
	I0114 10:32:12.687523  110500 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0114 10:32:12.687533  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687545  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0114 10:32:12.687554  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687564  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687580  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0114 10:32:12.687590  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687600  110500 command_runner.go:130] >       "size": "725911",
	I0114 10:32:12.687609  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687617  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687623  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687632  110500 command_runner.go:130] >     },
	I0114 10:32:12.687644  110500 command_runner.go:130] >     {
	I0114 10:32:12.687658  110500 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0114 10:32:12.687668  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687695  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:32:12.687702  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687713  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687734  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0114 10:32:12.687743  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687753  110500 command_runner.go:130] >       "size": "9058936",
	I0114 10:32:12.687763  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687771  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687781  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687797  110500 command_runner.go:130] >     },
	I0114 10:32:12.687804  110500 command_runner.go:130] >     {
	I0114 10:32:12.687818  110500 command_runner.go:130] >       "id": "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
	I0114 10:32:12.687828  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687839  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:32:12.687848  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687858  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687869  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"
	I0114 10:32:12.687877  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687884  110500 command_runner.go:130] >       "size": "14837849",
	I0114 10:32:12.687895  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687903  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687913  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687922  110500 command_runner.go:130] >     },
	I0114 10:32:12.687930  110500 command_runner.go:130] >     {
	I0114 10:32:12.687946  110500 command_runner.go:130] >       "id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
	I0114 10:32:12.687956  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687965  110500 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.4-0"
	I0114 10:32:12.687969  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687976  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687991  110500 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"
	I0114 10:32:12.688000  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688008  110500 command_runner.go:130] >       "size": "102157811",
	I0114 10:32:12.688018  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688027  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688037  110500 command_runner.go:130] >       },
	I0114 10:32:12.688046  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688060  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688069  110500 command_runner.go:130] >     },
	I0114 10:32:12.688075  110500 command_runner.go:130] >     {
	I0114 10:32:12.688086  110500 command_runner.go:130] >       "id": "sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0",
	I0114 10:32:12.688096  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688106  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:32:12.688116  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688126  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688141  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"
	I0114 10:32:12.688151  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688161  110500 command_runner.go:130] >       "size": "34238163",
	I0114 10:32:12.688168  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688174  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688181  110500 command_runner.go:130] >       },
	I0114 10:32:12.688192  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688199  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688208  110500 command_runner.go:130] >     },
	I0114 10:32:12.688217  110500 command_runner.go:130] >     {
	I0114 10:32:12.688228  110500 command_runner.go:130] >       "id": "sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a",
	I0114 10:32:12.688238  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688250  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:32:12.688258  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688266  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688278  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"
	I0114 10:32:12.688287  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688295  110500 command_runner.go:130] >       "size": "31261869",
	I0114 10:32:12.688304  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688314  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688323  110500 command_runner.go:130] >       },
	I0114 10:32:12.688333  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688342  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688352  110500 command_runner.go:130] >     },
	I0114 10:32:12.688361  110500 command_runner.go:130] >     {
	I0114 10:32:12.688374  110500 command_runner.go:130] >       "id": "sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041",
	I0114 10:32:12.688387  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688396  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:32:12.688402  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688408  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688418  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"
	I0114 10:32:12.688431  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688439  110500 command_runner.go:130] >       "size": "20265805",
	I0114 10:32:12.688447  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.688455  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688465  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688471  110500 command_runner.go:130] >     },
	I0114 10:32:12.688484  110500 command_runner.go:130] >     {
	I0114 10:32:12.688495  110500 command_runner.go:130] >       "id": "sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912",
	I0114 10:32:12.688502  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688511  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:32:12.688520  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688527  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688542  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"
	I0114 10:32:12.688551  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688558  110500 command_runner.go:130] >       "size": "15798744",
	I0114 10:32:12.688566  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688571  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688580  110500 command_runner.go:130] >       },
	I0114 10:32:12.688587  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688597  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688603  110500 command_runner.go:130] >     },
	I0114 10:32:12.688613  110500 command_runner.go:130] >     {
	I0114 10:32:12.688626  110500 command_runner.go:130] >       "id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
	I0114 10:32:12.688636  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688644  110500 command_runner.go:130] >         "registry.k8s.io/pause:3.8"
	I0114 10:32:12.688653  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688661  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688677  110500 command_runner.go:130] >         "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	I0114 10:32:12.688684  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688742  110500 command_runner.go:130] >       "size": "311286",
	I0114 10:32:12.688753  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688761  110500 command_runner.go:130] >         "value": "65535"
	I0114 10:32:12.688767  110500 command_runner.go:130] >       },
	I0114 10:32:12.688775  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688790  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688801  110500 command_runner.go:130] >     }
	I0114 10:32:12.688808  110500 command_runner.go:130] >   ]
	I0114 10:32:12.688813  110500 command_runner.go:130] > }
	I0114 10:32:12.689381  110500 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:32:12.689398  110500 cache_images.go:84] Images are preloaded, skipping loading
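The JSON block above is the raw CRI image listing that minikube compares against its expected image set before concluding the preload can be skipped. A minimal sketch of reproducing the same check by hand on the node (jq is an assumption here; only crictl itself appears in this log):

	# List the tags the runtime already holds; minikube matches these against
	# the registry.k8s.io images required for Kubernetes v1.25.3.
	sudo crictl images -o json | jq -r '.images[].repoTags[]'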
	I0114 10:32:12.689437  110500 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:32:12.710587  110500 command_runner.go:130] > {
	I0114 10:32:12.710608  110500 command_runner.go:130] >   "status": {
	I0114 10:32:12.710615  110500 command_runner.go:130] >     "conditions": [
	I0114 10:32:12.710621  110500 command_runner.go:130] >       {
	I0114 10:32:12.710628  110500 command_runner.go:130] >         "type": "RuntimeReady",
	I0114 10:32:12.710634  110500 command_runner.go:130] >         "status": true,
	I0114 10:32:12.710640  110500 command_runner.go:130] >         "reason": "",
	I0114 10:32:12.710646  110500 command_runner.go:130] >         "message": ""
	I0114 10:32:12.710651  110500 command_runner.go:130] >       },
	I0114 10:32:12.710657  110500 command_runner.go:130] >       {
	I0114 10:32:12.710668  110500 command_runner.go:130] >         "type": "NetworkReady",
	I0114 10:32:12.710677  110500 command_runner.go:130] >         "status": true,
	I0114 10:32:12.710687  110500 command_runner.go:130] >         "reason": "",
	I0114 10:32:12.710696  110500 command_runner.go:130] >         "message": ""
	I0114 10:32:12.710705  110500 command_runner.go:130] >       }
	I0114 10:32:12.710713  110500 command_runner.go:130] >     ]
	I0114 10:32:12.710720  110500 command_runner.go:130] >   },
	I0114 10:32:12.710728  110500 command_runner.go:130] >   "cniconfig": {
	I0114 10:32:12.710738  110500 command_runner.go:130] >     "PluginDirs": [
	I0114 10:32:12.710749  110500 command_runner.go:130] >       "/opt/cni/bin"
	I0114 10:32:12.710758  110500 command_runner.go:130] >     ],
	I0114 10:32:12.710773  110500 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.mk",
	I0114 10:32:12.710784  110500 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0114 10:32:12.710792  110500 command_runner.go:130] >     "Prefix": "eth",
	I0114 10:32:12.710802  110500 command_runner.go:130] >     "Networks": [
	I0114 10:32:12.710812  110500 command_runner.go:130] >       {
	I0114 10:32:12.710820  110500 command_runner.go:130] >         "Config": {
	I0114 10:32:12.710835  110500 command_runner.go:130] >           "Name": "cni-loopback",
	I0114 10:32:12.710847  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:32:12.710856  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:32:12.710866  110500 command_runner.go:130] >             {
	I0114 10:32:12.710875  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.710886  110500 command_runner.go:130] >                 "type": "loopback",
	I0114 10:32:12.710896  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:32:12.710902  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.710907  110500 command_runner.go:130] >               },
	I0114 10:32:12.710917  110500 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0114 10:32:12.710927  110500 command_runner.go:130] >             }
	I0114 10:32:12.710936  110500 command_runner.go:130] >           ],
	I0114 10:32:12.710949  110500 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0114 10:32:12.710956  110500 command_runner.go:130] >         },
	I0114 10:32:12.710967  110500 command_runner.go:130] >         "IFName": "lo"
	I0114 10:32:12.710977  110500 command_runner.go:130] >       },
	I0114 10:32:12.710986  110500 command_runner.go:130] >       {
	I0114 10:32:12.710994  110500 command_runner.go:130] >         "Config": {
	I0114 10:32:12.711008  110500 command_runner.go:130] >           "Name": "kindnet",
	I0114 10:32:12.711018  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:32:12.711025  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:32:12.711035  110500 command_runner.go:130] >             {
	I0114 10:32:12.711044  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.711055  110500 command_runner.go:130] >                 "type": "ptp",
	I0114 10:32:12.711066  110500 command_runner.go:130] >                 "ipam": {
	I0114 10:32:12.711078  110500 command_runner.go:130] >                   "type": "host-local"
	I0114 10:32:12.711088  110500 command_runner.go:130] >                 },
	I0114 10:32:12.711096  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.711106  110500 command_runner.go:130] >               },
	I0114 10:32:12.711127  110500 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0114 10:32:12.711140  110500 command_runner.go:130] >             },
	I0114 10:32:12.711150  110500 command_runner.go:130] >             {
	I0114 10:32:12.711159  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.711172  110500 command_runner.go:130] >                 "type": "portmap",
	I0114 10:32:12.711183  110500 command_runner.go:130] >                 "capabilities": {
	I0114 10:32:12.711194  110500 command_runner.go:130] >                   "portMappings": true
	I0114 10:32:12.711201  110500 command_runner.go:130] >                 },
	I0114 10:32:12.711211  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:32:12.711223  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.711231  110500 command_runner.go:130] >               },
	I0114 10:32:12.711245  110500 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0114 10:32:12.711255  110500 command_runner.go:130] >             }
	I0114 10:32:12.711263  110500 command_runner.go:130] >           ],
	I0114 10:32:12.711307  110500 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0114 10:32:12.711317  110500 command_runner.go:130] >         },
	I0114 10:32:12.711325  110500 command_runner.go:130] >         "IFName": "eth0"
	I0114 10:32:12.711331  110500 command_runner.go:130] >       }
	I0114 10:32:12.711337  110500 command_runner.go:130] >     ]
	I0114 10:32:12.711348  110500 command_runner.go:130] >   },
	I0114 10:32:12.711358  110500 command_runner.go:130] >   "config": {
	I0114 10:32:12.711366  110500 command_runner.go:130] >     "containerd": {
	I0114 10:32:12.711377  110500 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0114 10:32:12.711388  110500 command_runner.go:130] >       "defaultRuntimeName": "default",
	I0114 10:32:12.711399  110500 command_runner.go:130] >       "defaultRuntime": {
	I0114 10:32:12.711409  110500 command_runner.go:130] >         "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711419  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:32:12.711430  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:32:12.711438  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:32:12.711449  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:32:12.711461  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:32:12.711472  110500 command_runner.go:130] >         "options": null,
	I0114 10:32:12.711484  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:32:12.711495  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:32:12.711504  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:32:12.711513  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:32:12.711522  110500 command_runner.go:130] >       },
	I0114 10:32:12.711531  110500 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0114 10:32:12.711542  110500 command_runner.go:130] >         "runtimeType": "",
	I0114 10:32:12.711553  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:32:12.711563  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:32:12.711575  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:32:12.711586  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:32:12.711597  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:32:12.711607  110500 command_runner.go:130] >         "options": null,
	I0114 10:32:12.711616  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:32:12.711627  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:32:12.711638  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:32:12.711649  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:32:12.711658  110500 command_runner.go:130] >       },
	I0114 10:32:12.711684  110500 command_runner.go:130] >       "runtimes": {
	I0114 10:32:12.711694  110500 command_runner.go:130] >         "default": {
	I0114 10:32:12.711706  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711717  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:32:12.711728  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:32:12.711739  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:32:12.711751  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:32:12.711762  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:32:12.711781  110500 command_runner.go:130] >           "options": null,
	I0114 10:32:12.711794  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:32:12.711805  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:32:12.711816  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:32:12.711826  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:32:12.711837  110500 command_runner.go:130] >         },
	I0114 10:32:12.711848  110500 command_runner.go:130] >         "runc": {
	I0114 10:32:12.711861  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711871  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:32:12.711879  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:32:12.711890  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:32:12.711902  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:32:12.711912  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:32:12.711923  110500 command_runner.go:130] >           "options": {
	I0114 10:32:12.711975  110500 command_runner.go:130] >             "SystemdCgroup": false
	I0114 10:32:12.711989  110500 command_runner.go:130] >           },
	I0114 10:32:12.711998  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:32:12.712006  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:32:12.712017  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:32:12.712028  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:32:12.712036  110500 command_runner.go:130] >         }
	I0114 10:32:12.712045  110500 command_runner.go:130] >       },
	I0114 10:32:12.712057  110500 command_runner.go:130] >       "noPivot": false,
	I0114 10:32:12.712068  110500 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0114 10:32:12.712078  110500 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0114 10:32:12.712089  110500 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0114 10:32:12.712099  110500 command_runner.go:130] >     },
	I0114 10:32:12.712107  110500 command_runner.go:130] >     "cni": {
	I0114 10:32:12.712118  110500 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0114 10:32:12.712130  110500 command_runner.go:130] >       "confDir": "/etc/cni/net.mk",
	I0114 10:32:12.712140  110500 command_runner.go:130] >       "maxConfNum": 1,
	I0114 10:32:12.712151  110500 command_runner.go:130] >       "confTemplate": "",
	I0114 10:32:12.712161  110500 command_runner.go:130] >       "ipPref": ""
	I0114 10:32:12.712170  110500 command_runner.go:130] >     },
	I0114 10:32:12.712177  110500 command_runner.go:130] >     "registry": {
	I0114 10:32:12.712189  110500 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0114 10:32:12.712199  110500 command_runner.go:130] >       "mirrors": null,
	I0114 10:32:12.712209  110500 command_runner.go:130] >       "configs": null,
	I0114 10:32:12.712220  110500 command_runner.go:130] >       "auths": null,
	I0114 10:32:12.712232  110500 command_runner.go:130] >       "headers": null
	I0114 10:32:12.712242  110500 command_runner.go:130] >     },
	I0114 10:32:12.712251  110500 command_runner.go:130] >     "imageDecryption": {
	I0114 10:32:12.712261  110500 command_runner.go:130] >       "keyModel": "node"
	I0114 10:32:12.712267  110500 command_runner.go:130] >     },
	I0114 10:32:12.712274  110500 command_runner.go:130] >     "disableTCPService": true,
	I0114 10:32:12.712281  110500 command_runner.go:130] >     "streamServerAddress": "",
	I0114 10:32:12.712292  110500 command_runner.go:130] >     "streamServerPort": "10010",
	I0114 10:32:12.712303  110500 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0114 10:32:12.712312  110500 command_runner.go:130] >     "enableSelinux": false,
	I0114 10:32:12.712324  110500 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0114 10:32:12.712337  110500 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.8",
	I0114 10:32:12.712348  110500 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0114 10:32:12.712360  110500 command_runner.go:130] >     "systemdCgroup": false,
	I0114 10:32:12.712368  110500 command_runner.go:130] >     "enableTLSStreaming": false,
	I0114 10:32:12.712379  110500 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0114 10:32:12.712388  110500 command_runner.go:130] >       "tlsCertFile": "",
	I0114 10:32:12.712398  110500 command_runner.go:130] >       "tlsKeyFile": ""
	I0114 10:32:12.712407  110500 command_runner.go:130] >     },
	I0114 10:32:12.712417  110500 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0114 10:32:12.712427  110500 command_runner.go:130] >     "disableCgroup": false,
	I0114 10:32:12.712436  110500 command_runner.go:130] >     "disableApparmor": false,
	I0114 10:32:12.712446  110500 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0114 10:32:12.712455  110500 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0114 10:32:12.712466  110500 command_runner.go:130] >     "disableProcMount": false,
	I0114 10:32:12.712477  110500 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0114 10:32:12.712487  110500 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0114 10:32:12.712496  110500 command_runner.go:130] >     "disableHugetlbController": true,
	I0114 10:32:12.712508  110500 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0114 10:32:12.712519  110500 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0114 10:32:12.712530  110500 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0114 10:32:12.712544  110500 command_runner.go:130] >     "enableUnprivilegedPorts": false,
	I0114 10:32:12.712557  110500 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0114 10:32:12.712569  110500 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0114 10:32:12.712582  110500 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0114 10:32:12.712594  110500 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0114 10:32:12.712607  110500 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0114 10:32:12.712615  110500 command_runner.go:130] >   },
	I0114 10:32:12.712623  110500 command_runner.go:130] >   "golang": "go1.18.8",
	I0114 10:32:12.712635  110500 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0114 10:32:12.712647  110500 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0114 10:32:12.712656  110500 command_runner.go:130] > }
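The `sudo crictl info` dump above combines the runtime's readiness conditions with the merged containerd CRI configuration, which is where the non-default CNI conf dir `/etc/cni/net.mk` comes from. A sketch for pulling out individual fields (again assuming jq is available on the node):

	# The CNI config directory containerd is actually watching.
	sudo crictl info | jq -r '.config.cni.confDir'        # /etc/cni/net.mk
	# Both readiness conditions on one line each.
	sudo crictl info | jq -r '.status.conditions[] | "\(.type)=\(.status)"'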
	I0114 10:32:12.712858  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:32:12.712872  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:32:12.712887  110500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:32:12.712904  110500 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102822 NodeName:multinode-102822 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:32:12.713036  110500 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-102822"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
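
	Note that `criSocket: /run/containerd/containerd.sock` in the InitConfiguration above carries no URL scheme; kubeadm v1.25 accepts it but warns and rewrites it (see the initconfiguration.go warnings further down). A quick way to spot the scheme-less value in the rendered file:

	# Show the socket path as written into the uploaded config.
	sudo grep criSocket /var/tmp/minikube/kubeadm.yaml
	#   criSocket: /run/containerd/containerd.sock
	# kubeadm normalizes this to unix:///run/containerd/containerd.sock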
	
	I0114 10:32:12.713135  110500 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-102822 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:32:12.713190  110500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:32:12.719488  110500 command_runner.go:130] > kubeadm
	I0114 10:32:12.719509  110500 command_runner.go:130] > kubectl
	I0114 10:32:12.719515  110500 command_runner.go:130] > kubelet
	I0114 10:32:12.720035  110500 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:32:12.720098  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:32:12.726696  110500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0114 10:32:12.738909  110500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:32:12.751222  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
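	The three scp'd payloads above are the kubelet systemd drop-in (10-kubeadm.conf, carrying the ExecStart line shown earlier), the kubelet unit itself, and the regenerated kubeadm config. A sketch for verifying what the node ends up running (plain systemd commands, not part of this log):

	# Shows the unit together with the 10-kubeadm.conf drop-in.
	sudo systemctl cat kubelet
	# Re-read unit files after they change.
	sudo systemctl daemon-reload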
	I0114 10:32:12.763791  110500 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:32:12.766553  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
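	The bash one-liner above is minikube's idempotent /etc/hosts update: it filters out any existing `control-plane.minikube.internal` entry, appends the current control-plane IP, and copies the temp file back with sudo so the redirection itself does not need root. Confirming the result:

	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.58.2	control-plane.minikube.internal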
	I0114 10:32:12.775632  110500 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822 for IP: 192.168.58.2
	I0114 10:32:12.775780  110500 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:32:12.775823  110500 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:32:12.775880  110500 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key
	I0114 10:32:12.775939  110500 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key.cee25041
	I0114 10:32:12.775975  110500 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key
	I0114 10:32:12.775986  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0114 10:32:12.775995  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0114 10:32:12.776009  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0114 10:32:12.776020  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0114 10:32:12.776030  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 10:32:12.776040  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 10:32:12.776050  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:32:12.776060  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:32:12.776095  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:32:12.776118  110500 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:32:12.776127  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:32:12.776146  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:32:12.776170  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:32:12.776190  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:32:12.776223  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:32:12.776254  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem -> /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.776268  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /usr/share/ca-certificates/103062.pem
	I0114 10:32:12.776276  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:12.776801  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:32:12.793649  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:32:12.809955  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:32:12.826165  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:32:12.842333  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:32:12.858766  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:32:12.874864  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:32:12.891037  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:32:12.907157  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:32:12.923498  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:32:12.940509  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:32:12.957076  110500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:32:12.969247  110500 ssh_runner.go:195] Run: openssl version
	I0114 10:32:12.973757  110500 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 10:32:12.973888  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:32:12.980925  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983712  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983767  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983798  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.988253  110500 command_runner.go:130] > 51391683
	I0114 10:32:12.988302  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:32:12.994808  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:32:13.001692  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004632  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004666  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004710  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.009155  110500 command_runner.go:130] > 3ec20f2e
	I0114 10:32:13.009284  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:32:13.015757  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:32:13.022799  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025630  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025669  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025717  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.030092  110500 command_runner.go:130] > b5213941
	I0114 10:32:13.030263  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
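	The `openssl x509 -hash` calls above print each certificate's subject-name hash (51391683, 3ec20f2e, b5213941), and the `<hash>.0` symlinks under /etc/ssl/certs are how OpenSSL's directory lookup finds a trusted CA. The same verification by hand:

	# The printed hash must match the symlink name for lookup to succeed.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	ls -l /etc/ssl/certs/b5213941.0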
	I0114 10:32:13.036698  110500 kubeadm.go:396] StartCluster: {Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:32:13.036791  110500 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:32:13.036836  110500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:32:13.058223  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:13.058244  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:13.058251  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:13.058260  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:13.058269  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:13.058277  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:13.058286  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:13.058300  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:13.060045  110500 cri.go:87] found id: "8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	I0114 10:32:13.060064  110500 cri.go:87] found id: "dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653"
	I0114 10:32:13.060072  110500 cri.go:87] found id: "fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642"
	I0114 10:32:13.060078  110500 cri.go:87] found id: "6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2"
	I0114 10:32:13.060084  110500 cri.go:87] found id: "1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028"
	I0114 10:32:13.060094  110500 cri.go:87] found id: "9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:32:13.060102  110500 cri.go:87] found id: "72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf"
	I0114 10:32:13.060109  110500 cri.go:87] found id: "1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2"
	I0114 10:32:13.060124  110500 cri.go:87] found id: ""
	I0114 10:32:13.060170  110500 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0114 10:32:13.070971  110500 command_runner.go:130] > null
	I0114 10:32:13.071003  110500 cri.go:114] JSON = null
	W0114 10:32:13.071044  110500 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
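	The warning above is minikube cross-checking two views of the runtime: `crictl ps -a` still knows about 8 kube-system containers registered in containerd, while `runc list` under the k8s.io root returns `null` because no live runc state survived the earlier `minikube stop`, so the paused-container check comes back empty and the unpause step is skipped. The second half of the check, runnable directly:

	# JSON list of live runc containers in the k8s.io namespace; "null" means none.
	sudo runc --root /run/containerd/runc/k8s.io list -f json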
	I0114 10:32:13.071091  110500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:32:13.077185  110500 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0114 10:32:13.077211  110500 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0114 10:32:13.077219  110500 command_runner.go:130] > /var/lib/minikube/etcd:
	I0114 10:32:13.077224  110500 command_runner.go:130] > member
	I0114 10:32:13.077710  110500 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:32:13.077727  110500 kubeadm.go:627] restartCluster start
	I0114 10:32:13.077773  110500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:32:13.083937  110500 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.084292  110500 kubeconfig.go:135] verify returned: extract IP: "multinode-102822" does not appear in /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:13.084399  110500 kubeconfig.go:146] "multinode-102822" context is missing from /home/jenkins/minikube-integration/15642-3818/kubeconfig - will repair!
	I0114 10:32:13.084667  110500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:13.085127  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:13.085339  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:13.085718  110500 cert_rotation.go:137] Starting client certificate rotation controller
	I0114 10:32:13.085897  110500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:32:13.092314  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.092361  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.099983  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.300390  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.300471  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.309061  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.500394  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.500496  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.508843  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.700076  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.700158  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.708648  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.900982  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.901059  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.909312  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.100665  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.100752  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.108909  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.300163  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.300255  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.308427  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.500734  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.500820  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.509529  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.700875  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.700944  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.709372  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.900776  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.900858  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.909023  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.100211  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.100291  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.108635  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.300982  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.301056  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.309251  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.500594  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.500686  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.509036  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.700390  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.700477  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.709024  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.900241  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.900308  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.908659  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.101018  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:16.101096  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:16.109426  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.109444  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:16.109480  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:16.117151  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.117179  110500 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
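	Each `Checking apiserver status ...` block above is one iteration of a polling loop (note the roughly 200ms spacing of the timestamps): minikube probes for a running kube-apiserver with pgrep and, after about three seconds of exit-status-1 results, concludes the control plane must be reconfigured rather than reused. The probe itself:

	# Prints the newest matching PID; exit status 1 and no output when none runs.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'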
	I0114 10:32:16.117187  110500 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:32:16.117204  110500 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:32:16.117249  110500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:32:16.139034  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:16.139057  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:16.139065  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:16.139074  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:16.139082  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:16.139089  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:16.139097  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:16.139108  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.140873  110500 cri.go:87] found id: "8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	I0114 10:32:16.140897  110500 cri.go:87] found id: "dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653"
	I0114 10:32:16.140903  110500 cri.go:87] found id: "fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642"
	I0114 10:32:16.140909  110500 cri.go:87] found id: "6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2"
	I0114 10:32:16.140915  110500 cri.go:87] found id: "1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028"
	I0114 10:32:16.140925  110500 cri.go:87] found id: "9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:32:16.140940  110500 cri.go:87] found id: "72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf"
	I0114 10:32:16.140949  110500 cri.go:87] found id: "1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2"
	I0114 10:32:16.140963  110500 cri.go:87] found id: ""
	I0114 10:32:16.140974  110500 cri.go:232] Stopping containers: [8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653 fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2]
	I0114 10:32:16.141047  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:32:16.143873  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:32:16.143954  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653 fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.164186  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:16.164545  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:16.164961  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:16.165442  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:16.165866  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:16.166173  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:16.166530  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:16.166912  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.168495  110500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
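	Before reconfiguring, minikube stops every kube-system container it found and then the kubelet, so nothing restarts them mid-rewrite. The same sequence compressed into two commands (`xargs -r` is GNU xargs, which the Ubuntu node provides):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
	    | xargs -r sudo crictl stop
	sudo systemctl stop kubelet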
	I0114 10:32:16.178319  110500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:32:16.184529  110500 command_runner.go:130] > -rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	I0114 10:32:16.184551  110500 command_runner.go:130] > -rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.184560  110500 command_runner.go:130] > -rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	I0114 10:32:16.184578  110500 command_runner.go:130] > -rw------- 1 root root 5604 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	I0114 10:32:16.185073  110500 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	
	I0114 10:32:16.185115  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:32:16.191129  110500 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 10:32:16.191775  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:32:16.197622  110500 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 10:32:16.198190  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.204644  110500 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.204691  110500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.211037  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:32:16.217158  110500 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.217202  110500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 10:32:16.223266  110500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:32:16.229757  110500 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
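
A minimal Go sketch of the endpoint check performed above: read each kubeconfig, and delete any file that no longer points at the expected control-plane URL so the next kubeadm phase regenerates it. This is an illustrative reconstruction, not minikube's actual code; the paths and endpoint are copied from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint and files as they appear in the log above.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // already missing: nothing to clean up
		}
		// A stale kubeconfig points at a different server URL; remove it
		// so "kubeadm init phase kubeconfig" writes a fresh one.
		if !strings.Contains(string(data), "server: "+endpoint) {
			fmt.Println("removing", f)
			os.Remove(f)
		}
	}
}
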
	I0114 10:32:16.229774  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:16.267698  110500 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:32:16.267727  110500 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0114 10:32:16.267829  110500 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0114 10:32:16.268005  110500 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 10:32:16.268198  110500 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0114 10:32:16.268305  110500 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0114 10:32:16.268411  110500 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0114 10:32:16.268622  110500 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0114 10:32:16.268811  110500 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0114 10:32:16.269005  110500 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 10:32:16.269151  110500 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 10:32:16.269270  110500 command_runner.go:130] > [certs] Using the existing "sa" key
	I0114 10:32:16.271593  110500 command_runner.go:130] ! W0114 10:32:16.262790     715 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:16.271629  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:16.308452  110500 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:32:16.591661  110500 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0114 10:32:16.834807  110500 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0114 10:32:16.917085  110500 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:32:16.963606  110500 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:32:16.966257  110500 command_runner.go:130] ! W0114 10:32:16.303745     726 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:16.966297  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.014855  110500 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:32:17.015614  110500 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:32:17.015700  110500 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 10:32:17.097994  110500 command_runner.go:130] ! W0114 10:32:16.998756     739 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:17.098089  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.134582  110500 command_runner.go:130] ! W0114 10:32:17.134094     774 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:17.147442  110500 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:32:17.147476  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:32:17.147487  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:32:17.147503  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:32:17.147521  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.236640  110500 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:32:17.242814  110500 command_runner.go:130] ! W0114 10:32:17.230809     792 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
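
The five "kubeadm init phase" invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) run in a fixed order against the same rendered config. A minimal Go sketch of that loop, assuming kubeadm is on PATH locally; minikube actually runs each command over SSH inside the node container, as the ssh_runner lines show.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Phase order as executed in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		log.Printf("kubeadm %v:\n%s", args, out)
		if err != nil {
			log.Fatalf("phase failed: %v", err)
		}
	}
}
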
	I0114 10:32:17.242849  110500 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:32:17.242894  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:17.752564  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:18.252426  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:18.260999  110500 command_runner.go:130] > 1113
	I0114 10:32:18.261657  110500 api_server.go:71] duration metric: took 1.01880732s to wait for apiserver process to appear ...
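
The process wait above simply re-runs pgrep until it exits 0. A standalone sketch of the same poll at the roughly 500ms interval seen in the log; the pgrep pattern is copied verbatim, and pgrep exits non-zero while nothing matches.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest match, -f match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			log.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for kube-apiserver process")
}
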
	I0114 10:32:18.261681  110500 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:32:18.261693  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:21.029950  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 10:32:21.029985  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 10:32:21.530625  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:21.535017  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:32:21.535038  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:32:22.030583  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:22.034640  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:32:22.034667  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:32:22.530186  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:22.535299  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
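
The healthz wait above treats the early 403 (anonymous requests are rejected until the RBAC bootstrap hook finishes) and the 500 (post-start hooks still failing) as "not ready yet" and keeps polling until a plain 200/ok. A minimal sketch of such a probe; it skips verification of the apiserver's self-signed certificate rather than loading the cluster CA.

package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				log.Printf("healthz: %s", body) // "ok"
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
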
	I0114 10:32:22.535363  110500 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0114 10:32:22.535370  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:22.535378  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:22.535387  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:22.542430  110500 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0114 10:32:22.542456  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:22.542463  110500 round_trippers.go:580]     Audit-Id: 1aad19f3-6767-4611-a5ba-372dd35e9aaa
	I0114 10:32:22.542469  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:22.542478  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:22.542486  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:22.542495  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:22.542501  110500 round_trippers.go:580]     Content-Length: 263
	I0114 10:32:22.542510  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 10:32:22.542548  110500 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 10:32:22.542642  110500 api_server.go:140] control plane version: v1.25.3
	I0114 10:32:22.542659  110500 api_server.go:130] duration metric: took 4.280973232s to wait for apiserver health ...
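
The version probe is a plain GET of /version; the JSON body shown above decodes into a small struct. A sketch, with the field names taken from that response body:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// Subset of the /version response body logged above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.25.3 here
}
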
	I0114 10:32:22.542670  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:32:22.542681  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:32:22.544760  110500 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 10:32:22.546388  110500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:32:22.549910  110500 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 10:32:22.549962  110500 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 10:32:22.549975  110500 command_runner.go:130] > Device: 34h/52d	Inode: 565966      Links: 1
	I0114 10:32:22.549983  110500 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:32:22.549994  110500 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:32:22.550002  110500 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:32:22.550010  110500 command_runner.go:130] > Change: 2023-01-14 10:06:59.488187836 +0000
	I0114 10:32:22.550015  110500 command_runner.go:130] >  Birth: -
	I0114 10:32:22.550073  110500 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:32:22.550086  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:32:22.563145  110500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:32:23.556622  110500 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:32:23.558324  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:32:23.559964  110500 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 10:32:23.572522  110500 command_runner.go:130] > daemonset.apps/kindnet configured
	I0114 10:32:23.576291  110500 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.013116951s)
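
Applying the CNI manifest amounts to running the version-matched kubectl against the node-local kubeconfig. A sketch using os/exec, with the binary, kubeconfig, and manifest paths copied from the log; minikube runs the same command over SSH inside the node.

package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command(
		"sudo", "/var/lib/minikube/binaries/v1.25.3/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	).CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}
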
	I0114 10:32:23.576318  110500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:32:23.576411  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:23.576418  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.576426  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.576434  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.580021  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:23.580051  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.580062  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.580070  110500 round_trippers.go:580]     Audit-Id: d0301af9-af93-4972-a603-26d225a78b49
	I0114 10:32:23.580078  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.580086  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.580101  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.580109  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.580814  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"684"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84105 chars]
	I0114 10:32:23.586341  110500 system_pods.go:59] 12 kube-system pods found
	I0114 10:32:23.586386  110500 system_pods.go:61] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:23.586402  110500 system_pods.go:61] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 10:32:23.586417  110500 system_pods.go:61] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:23.586424  110500 system_pods.go:61] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:23.586434  110500 system_pods.go:61] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0114 10:32:23.586442  110500 system_pods.go:61] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:23.586451  110500 system_pods.go:61] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 10:32:23.586463  110500 system_pods.go:61] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:23.586473  110500 system_pods.go:61] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:23.586480  110500 system_pods.go:61] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:23.586490  110500 system_pods.go:61] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:23.586499  110500 system_pods.go:61] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running
	I0114 10:32:23.586505  110500 system_pods.go:74] duration metric: took 10.181942ms to wait for pod list to return data ...
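
The kube-system sweep above is an ordinary pod list. The equivalent with client-go (the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
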
	I0114 10:32:23.586518  110500 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:32:23.586586  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:23.586597  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.586606  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.586613  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.588779  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.588796  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.588806  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.588815  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.588826  110500 round_trippers.go:580]     Audit-Id: 6ef374f5-6c43-4398-b878-dabcf026fa21
	I0114 10:32:23.588834  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.588845  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.588857  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.589115  110500 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"684"},"items":[{"metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15962 chars]
	I0114 10:32:23.589921  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589939  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589950  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589954  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589958  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589961  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589966  110500 node_conditions.go:105] duration metric: took 3.442775ms to run NodePressure ...
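
The NodePressure step reads each node's capacity, the ephemeral-storage and cpu figures logged above for all three nodes. A client-go sketch of the same read (same illustrative kubeconfig path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}
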
	I0114 10:32:23.589987  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:23.691341  110500 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0114 10:32:23.734463  110500 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0114 10:32:23.736834  110500 command_runner.go:130] ! W0114 10:32:23.629296    1801 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:23.736875  110500 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 10:32:23.736963  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0114 10:32:23.736973  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.736985  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.736994  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.739456  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.739481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.739491  110500 round_trippers.go:580]     Audit-Id: c46d33b0-2b93-4009-a7b0-a83f39889d32
	I0114 10:32:23.739500  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.739509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.739522  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.739534  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.739546  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.739840  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"686"},"items":[{"metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30422 chars]
	I0114 10:32:23.740800  110500 kubeadm.go:778] kubelet initialised
	I0114 10:32:23.740814  110500 kubeadm.go:779] duration metric: took 3.928883ms waiting for restarted kubelet to initialise ...
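
The restarted-kubelet check above lists only the static control-plane pods, using the tier=control-plane label selector encoded in the request URL. A client-go sketch of that filtered list:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	fmt.Println("static control-plane pods:", len(pods.Items))
}
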
	I0114 10:32:23.740821  110500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:32:23.740868  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:23.740876  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.740885  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.740894  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.743447  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.743463  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.743469  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.743485  110500 round_trippers.go:580]     Audit-Id: c6bded4f-4aa4-42be-a4e4-20ebe0546a46
	I0114 10:32:23.743493  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.743500  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.743509  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.743518  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.744192  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"686"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84105 chars]
	I0114 10:32:23.746598  110500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.746657  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:23.746664  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.746672  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.746681  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.748217  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.748238  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.748245  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.748253  110500 round_trippers.go:580]     Audit-Id: d1d08fdc-7445-4898-b7eb-6476beda912d
	I0114 10:32:23.748262  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.748274  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.748286  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.748294  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.748395  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6329 chars]
	I0114 10:32:23.748815  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:23.748828  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.748835  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.748845  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.750211  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.750225  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.750232  110500 round_trippers.go:580]     Audit-Id: a95fc07c-b593-46bf-8f30-63ff02257647
	I0114 10:32:23.750240  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.750248  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.750257  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.750272  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.750283  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.750383  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:23.750658  110500 pod_ready.go:92] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:23.750671  110500 pod_ready.go:81] duration metric: took 4.054192ms waiting for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
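
The per-pod readiness wait, just completed for coredns and about to run for etcd below, polls the pod until its Ready condition reports True, giving up after 4 minutes. A sketch using client-go's wait helper at the roughly 500ms interval seen in the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-multinode-102822", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		return podReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd pod is Ready")
}
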
	I0114 10:32:23.750678  110500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.750715  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:23.750722  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.750729  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.750734  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.752197  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.752212  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.752220  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.752226  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.752231  110500 round_trippers.go:580]     Audit-Id: e9cb9640-9e23-42f8-94a1-c70e896a63a2
	I0114 10:32:23.752237  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.752246  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.752262  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.752376  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:23.752697  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:23.752709  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.752716  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.752722  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.754027  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.754047  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.754056  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.754064  110500 round_trippers.go:580]     Audit-Id: 83d20878-ef82-4af2-a8ed-28c1b8299d89
	I0114 10:32:23.754073  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.754085  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.754093  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.754103  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.754191  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:24.255300  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:24.255338  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.255348  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.255355  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.257483  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:24.257502  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.257509  110500 round_trippers.go:580]     Audit-Id: d39e3692-47b7-4e86-ada1-da6bc3a167a8
	I0114 10:32:24.257517  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.257526  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.257536  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.257548  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.257562  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.257672  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:24.258113  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:24.258127  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.258138  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.258147  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.259968  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.259991  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.260002  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.260012  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.260020  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.260026  110500 round_trippers.go:580]     Audit-Id: a0b0bcf0-c6b9-4f99-aedf-f72364dcfbaf
	I0114 10:32:24.260033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.260047  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.260220  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:24.754715  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:24.754736  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.754744  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.754750  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.756730  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.756773  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.756783  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.756791  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.756800  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.756811  110500 round_trippers.go:580]     Audit-Id: df0a6fa2-a310-40fd-976c-91df137ef1ec
	I0114 10:32:24.756823  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.756832  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.756952  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:24.757338  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:24.757350  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.757357  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.757363  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.758957  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.758976  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.758984  110500 round_trippers.go:580]     Audit-Id: d4a78893-a75c-47a5-9145-000537d9e421
	I0114 10:32:24.758993  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.759002  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.759013  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.759023  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.759039  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.759147  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.254722  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:25.254743  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.254751  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.254758  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.256711  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.256735  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.256747  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.256756  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.256765  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.256773  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.256785  110500 round_trippers.go:580]     Audit-Id: b0b46cf3-6548-4625-889c-7ab1f6b91f5f
	I0114 10:32:25.256797  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.256916  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:25.257361  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:25.257375  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.257382  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.257392  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.258984  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.259007  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.259013  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.259019  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.259027  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.259036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.259052  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.259060  110500 round_trippers.go:580]     Audit-Id: 055c2428-6dda-4feb-871a-f78137c59674
	I0114 10:32:25.259182  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.754723  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:25.754750  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.754758  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.754764  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.756888  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:25.756905  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.756912  110500 round_trippers.go:580]     Audit-Id: 5ead797e-e2a7-46ae-8b8e-71c88b5db5b4
	I0114 10:32:25.756917  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.756923  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.756932  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.756941  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.756975  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.757091  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:25.757536  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:25.757549  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.757558  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.757564  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.759116  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.759139  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.759149  110500 round_trippers.go:580]     Audit-Id: c137e88b-c781-4c64-bb12-5a1558b3c42d
	I0114 10:32:25.759158  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.759166  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.759173  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.759181  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.759186  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.759337  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.759698  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:26.255190  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:26.255211  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.255222  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.255230  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.257295  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:26.257321  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.257331  110500 round_trippers.go:580]     Audit-Id: 4118a105-2c79-4fa0-a2d9-f41e62a1936d
	I0114 10:32:26.257341  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.257349  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.257356  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.257361  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.257366  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.257473  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:26.257881  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:26.257893  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.257900  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.257906  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.259401  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:26.259416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.259422  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.259427  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.259434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.259443  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.259452  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.259463  110500 round_trippers.go:580]     Audit-Id: e165b0e4-85ed-42f7-8b3b-16d681c452ff
	I0114 10:32:26.259579  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:26.755171  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:26.755193  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.755204  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.755212  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.757390  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:26.757416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.757426  110500 round_trippers.go:580]     Audit-Id: 20047ef4-60e6-42c1-ac1d-2be32965f108
	I0114 10:32:26.757437  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.757445  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.757457  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.757470  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.757479  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.757601  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:26.757998  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:26.758011  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.758019  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.758025  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.759714  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:26.759735  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.759745  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.759750  110500 round_trippers.go:580]     Audit-Id: 4dee5e22-75a1-4049-a04f-14302d303af1
	I0114 10:32:26.759756  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.759764  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.759770  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.759779  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.759917  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.255456  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:27.255476  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.255485  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.255491  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.257395  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.257416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.257422  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.257428  110500 round_trippers.go:580]     Audit-Id: 8b584991-7a59-4031-a6ee-ed36b8d982da
	I0114 10:32:27.257433  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.257438  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.257444  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.257450  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.257554  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:27.257971  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:27.257985  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.257995  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.258004  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.259540  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.259560  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.259570  110500 round_trippers.go:580]     Audit-Id: c61cc11c-4f94-4185-85ee-04dcc2eaf2c6
	I0114 10:32:27.259579  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.259588  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.259599  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.259608  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.259619  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.259770  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.755355  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:27.755375  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.755388  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.755394  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.757497  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:27.757520  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.757531  110500 round_trippers.go:580]     Audit-Id: 3ae75ec2-a8ef-4917-afc8-ef9aa3d382cd
	I0114 10:32:27.757540  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.757549  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.757558  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.757578  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.757589  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.757717  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:27.758225  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:27.758239  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.758250  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.758260  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.759856  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.759873  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.759881  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.759886  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.759891  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.759897  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.759904  110500 round_trippers.go:580]     Audit-Id: 2167b85d-7304-4b18-982d-1ea14fbc5a03
	I0114 10:32:27.759909  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.760031  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.760332  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:28.255596  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:28.255615  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.255623  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.255629  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.257537  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.257560  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.257570  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.257578  110500 round_trippers.go:580]     Audit-Id: 03d4795a-bb14-49fe-8c24-291b677b4317
	I0114 10:32:28.257585  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.257593  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.257602  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.257611  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.257746  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:28.258130  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:28.258143  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.258153  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.258163  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.259792  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.259811  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.259821  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.259828  110500 round_trippers.go:580]     Audit-Id: 6e41dfe8-826e-4c61-9d51-16e6b88d0c61
	I0114 10:32:28.259836  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.259845  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.259854  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.259864  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.260018  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:28.755686  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:28.755715  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.755727  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.755738  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.757862  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:28.757882  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.757892  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.757899  110500 round_trippers.go:580]     Audit-Id: 09283aa4-913b-44ca-ac73-1a2c219fa6d2
	I0114 10:32:28.757907  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.757916  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.757924  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.757940  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.758111  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:28.758529  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:28.758541  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.758552  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.758561  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.760208  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.760232  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.760243  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.760252  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.760262  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.760275  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.760280  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.760286  110500 round_trippers.go:580]     Audit-Id: 79e70e8c-2b2b-460d-8f0d-0a3d61924cb6
	I0114 10:32:28.760370  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:29.254944  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:29.254966  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.254974  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.254980  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.256978  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.256998  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.257007  110500 round_trippers.go:580]     Audit-Id: 6e8024a5-8df6-485d-a1cc-c9a6aaec52b9
	I0114 10:32:29.257018  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.257029  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.257036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.257045  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.257060  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.257165  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:29.257549  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:29.257561  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.257569  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.257575  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.259222  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.259239  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.259248  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.259256  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.259264  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.259273  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.259286  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.259299  110500 round_trippers.go:580]     Audit-Id: e5165d7a-cea8-4461-96bf-7372805a0bad
	I0114 10:32:29.259430  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:29.755550  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:29.755572  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.755582  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.755591  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.757508  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.757531  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.757541  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.757549  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.757556  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.757565  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.757577  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.757590  110500 round_trippers.go:580]     Audit-Id: 165f76d6-f2b4-485c-a16c-536de0a4d900
	I0114 10:32:29.757704  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:29.758097  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:29.758111  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.758121  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.758130  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.759815  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.759840  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.759851  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.759860  110500 round_trippers.go:580]     Audit-Id: 2334a4fc-ab40-46b8-9a67-f8c6ee0a221f
	I0114 10:32:29.759868  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.759881  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.759890  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.759901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.760019  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:30.255612  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:30.255638  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.255649  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.255657  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.257668  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.257686  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.257694  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.257700  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.257705  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.257713  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.257721  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.257739  110500 round_trippers.go:580]     Audit-Id: 8b29845a-1bac-4410-8ec6-f6d50426573e
	I0114 10:32:30.257848  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:30.258249  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:30.258262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.258269  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.258277  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.260013  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.260035  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.260046  110500 round_trippers.go:580]     Audit-Id: da3743d4-cd4c-437a-b1cc-e53d3ce1d217
	I0114 10:32:30.260055  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.260069  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.260078  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.260090  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.260101  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.260216  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:30.260612  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:30.754777  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:30.754797  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.754807  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.754814  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.756907  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:30.756932  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.756943  110500 round_trippers.go:580]     Audit-Id: 0e68a683-03ba-4f20-9b66-357e1ebd6f7a
	I0114 10:32:30.756952  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.756964  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.756975  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.756985  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.756997  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.757115  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:30.757633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:30.757652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.757663  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.757674  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.759328  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.759350  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.759360  110500 round_trippers.go:580]     Audit-Id: 79b5c42f-d5ee-473f-9ef2-7ddbd23be82b
	I0114 10:32:30.759369  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.759380  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.759393  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.759402  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.759415  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.759552  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:31.255085  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:31.255105  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.255125  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.255133  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.257142  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.257166  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.257173  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.257179  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.257188  110500 round_trippers.go:580]     Audit-Id: 5b02cf04-91ee-4c3f-b2b9-a589aad94bae
	I0114 10:32:31.257196  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.257207  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.257224  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.257348  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:31.257765  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:31.257780  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.257791  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.257808  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.259361  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.259385  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.259394  110500 round_trippers.go:580]     Audit-Id: 7c3adabb-9744-4a02-b2bc-e2aec5b89a83
	I0114 10:32:31.259403  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.259412  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.259424  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.259435  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.259447  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.259532  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:31.755109  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:31.755130  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.755138  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.755145  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.757374  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:31.757401  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.757411  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.757418  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.757425  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.757432  110500 round_trippers.go:580]     Audit-Id: c0266278-a5dd-4e1b-af76-c45306fd69fe
	I0114 10:32:31.757440  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.757450  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.757613  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:31.757997  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:31.758009  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.758016  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.758022  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.759699  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.759725  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.759735  110500 round_trippers.go:580]     Audit-Id: 65120ef6-edb7-4836-9391-4ac8e7c2ed70
	I0114 10:32:31.759745  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.759758  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.759770  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.759783  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.759792  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.759887  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.255485  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:32.255506  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.255517  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.255525  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.257491  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.257519  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.257530  110500 round_trippers.go:580]     Audit-Id: f09b3877-d0df-4034-8fb4-90ce1a1bd2de
	I0114 10:32:32.257540  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.257552  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.257561  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.257573  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.257579  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.257716  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:32.258163  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:32.258178  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.258185  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.258192  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.259808  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.259830  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.259840  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.259849  110500 round_trippers.go:580]     Audit-Id: 6127838b-d1d4-40db-901d-1116a8eeaaae
	I0114 10:32:32.259862  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.259871  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.259884  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.259893  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.260004  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.755587  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:32.755609  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.755618  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.755625  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.757750  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:32.757772  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.757782  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.757791  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.757801  110500 round_trippers.go:580]     Audit-Id: fd727de4-8b43-4307-abce-74fef66a240a
	I0114 10:32:32.757812  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.757824  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.757833  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.757927  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:32.758367  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:32.758380  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.758387  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.758394  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.760252  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.760275  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.760285  110500 round_trippers.go:580]     Audit-Id: 1ba406ab-1408-47f3-9f7b-d83a23d1d995
	I0114 10:32:32.760294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.760303  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.760311  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.760320  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.760333  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.760453  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.760759  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
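	(The pod_ready.go messages come from minikube's readiness wait loop, which re-fetches the pod roughly every 500ms, as the timestamps above show, until its Ready condition flips to True or the "waiting up to 4m0s" deadline expires. A sketch of that pattern with client-go follows; the helper name and exact interval are illustrative assumptions, not minikube's actual code:
	
		// Sketch of the readiness loop behind the pod_ready.go messages:
		// poll the pod until its Ready condition is True or the timeout
		// (seen above as "waiting up to 4m0s") expires.
		package readiness
		
		import (
			"context"
			"time"
		
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
		)
		
		func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
			return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		}
	
	Each iteration below also GETs the pod's node, which is why every etcd poll is paired with a GET of /api/v1/nodes/multinode-102822. The loop exits once the pod's resourceVersion advances (682 to 777 below) and the Ready condition reports True.)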
	I0114 10:32:33.255183  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:33.255203  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.255216  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.255227  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.256935  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:33.256955  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.256965  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.256974  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.256983  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.256997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.257006  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.257015  110500 round_trippers.go:580]     Audit-Id: f707a20e-84c0-4220-8de1-a410d53bbbd2
	I0114 10:32:33.257114  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:33.257633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:33.257652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.257664  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.257673  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.261003  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:33.261025  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.261038  110500 round_trippers.go:580]     Audit-Id: 55c5ec23-8f33-46de-a5f1-ca14186a4547
	I0114 10:32:33.261047  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.261055  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.261067  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.261079  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.261089  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.261197  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:33.754764  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:33.754788  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.754796  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.754802  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.756951  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:33.756974  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.756990  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.756999  110500 round_trippers.go:580]     Audit-Id: 3b0d4e77-45b8-44ee-b4f3-262610cdf21f
	I0114 10:32:33.757012  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.757024  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.757036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.757049  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.757166  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:33.757602  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:33.757615  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.757622  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.757628  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.759206  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:33.759227  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.759236  110500 round_trippers.go:580]     Audit-Id: bb4508cd-9c1a-4d46-a92a-ce006889479a
	I0114 10:32:33.759246  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.759255  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.759267  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.759279  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.759292  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.759396  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:34.254745  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:34.254766  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.254774  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.254780  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.256865  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:34.256890  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.256901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.256910  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.256918  110500 round_trippers.go:580]     Audit-Id: 19099497-60b0-4de7-a3a8-250a4c3230ae
	I0114 10:32:34.256927  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.256937  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.256949  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.257056  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:34.257478  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:34.257492  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.257500  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.257507  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.259093  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:34.259116  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.259125  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.259135  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.259147  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.259156  110500 round_trippers.go:580]     Audit-Id: 652cb597-28f1-4a27-a56c-a6bd5d19765f
	I0114 10:32:34.259168  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.259179  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.259278  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:34.754848  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:34.754881  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.754890  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.754897  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.757014  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:34.757041  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.757052  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.757061  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.757067  110500 round_trippers.go:580]     Audit-Id: c80617eb-7d4c-4c6e-9079-9c6bcb1f5c04
	I0114 10:32:34.757074  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.757083  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.757093  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.757298  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:34.757687  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:34.757699  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.757707  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.757713  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.759337  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:34.759354  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.759360  110500 round_trippers.go:580]     Audit-Id: 7b4eee6e-6983-4da2-a62a-b725116b5647
	I0114 10:32:34.759366  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.759371  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.759379  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.759388  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.759399  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.759513  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:35.254779  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:35.254804  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.254816  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.254825  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.256971  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:35.256996  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.257004  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.257010  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.257016  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.257025  110500 round_trippers.go:580]     Audit-Id: 0d3539bd-6778-49cc-96e9-9bad1309f553
	I0114 10:32:35.257033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.257045  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.257162  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:35.257610  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:35.257623  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.257631  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.257637  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.259224  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:35.259240  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.259247  110500 round_trippers.go:580]     Audit-Id: 0ae81ca5-2444-4e02-9a52-f90078681427
	I0114 10:32:35.259255  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.259263  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.259275  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.259288  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.259299  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.259411  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:35.259731  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:35.754968  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:35.754991  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.754999  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.755005  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.757108  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:35.757134  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.757141  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.757147  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.757153  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.757158  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.757164  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.757169  110500 round_trippers.go:580]     Audit-Id: 346069ca-82c5-48d2-9563-9b2ddbf48dc1
	I0114 10:32:35.757280  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:35.757685  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:35.757698  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.757706  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.757712  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.759339  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:35.759361  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.759371  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.759380  110500 round_trippers.go:580]     Audit-Id: 4f47e4ad-f84c-4e00-b267-67aaa09518dc
	I0114 10:32:35.759390  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.759403  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.759408  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.759420  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.759543  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.255483  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:36.255502  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.255510  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.255516  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.257488  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.257509  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.257516  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.257521  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.257527  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.257532  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.257537  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.257542  110500 round_trippers.go:580]     Audit-Id: 686cc3bb-197f-4b71-8669-c9c811866ccb
	I0114 10:32:36.257662  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"777","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6035 chars]
	I0114 10:32:36.258125  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.258140  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.258151  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.258161  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.259632  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.259648  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.259654  110500 round_trippers.go:580]     Audit-Id: 88ae3e7f-6045-49d8-bb2d-357c67401973
	I0114 10:32:36.259660  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.259667  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.259691  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.259703  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.259722  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.259868  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.260159  110500 pod_ready.go:92] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.260181  110500 pod_ready.go:81] duration metric: took 12.509495988s waiting for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.260205  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.260261  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102822
	I0114 10:32:36.260270  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.260282  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.260297  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.261791  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.261809  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.261818  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.261826  110500 round_trippers.go:580]     Audit-Id: 5634a153-6fb5-404f-a137-f53afacc1245
	I0114 10:32:36.261834  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.261846  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.261855  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.261869  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.262026  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102822","namespace":"kube-system","uid":"c74a88c9-d603-4a80-a194-de75c8d0a3a5","resourceVersion":"770","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.mirror":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.seen":"2023-01-14T10:28:42.123458577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8421 chars]
	I0114 10:32:36.262461  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.262472  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.262479  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.262487  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.263929  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.263945  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.263954  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.263962  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.263970  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.263982  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.263995  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.264005  110500 round_trippers.go:580]     Audit-Id: e55d109e-705e-470a-9744-b5583c449686
	I0114 10:32:36.264139  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.264398  110500 pod_ready.go:92] pod "kube-apiserver-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.264408  110500 pod_ready.go:81] duration metric: took 4.192145ms waiting for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.264418  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.264453  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102822
	I0114 10:32:36.264460  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.264467  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.264474  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.266038  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.266055  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.266064  110500 round_trippers.go:580]     Audit-Id: 47324d68-80de-4990-a022-d7d52a3fcbf0
	I0114 10:32:36.266071  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.266079  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.266088  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.266097  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.266107  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.266215  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102822","namespace":"kube-system","uid":"85c8a264-96f3-4fcf-affd-917b94bdd177","resourceVersion":"774","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.mirror":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.seen":"2023-01-14T10:28:42.123460297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7996 chars]
	I0114 10:32:36.266578  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.266591  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.266601  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.266611  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.267963  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.267978  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.267984  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.267990  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.267996  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.268005  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.268017  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.268028  110500 round_trippers.go:580]     Audit-Id: 93002af6-97df-471f-95fa-3d5e668e2fca
	I0114 10:32:36.268120  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.268373  110500 pod_ready.go:92] pod "kube-controller-manager-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.268384  110500 pod_ready.go:81] duration metric: took 3.959272ms waiting for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.268394  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.268427  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d5n6
	I0114 10:32:36.268434  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.268441  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.268447  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.269931  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.269946  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.269953  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.269959  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.269964  110500 round_trippers.go:580]     Audit-Id: 51ab0589-c66b-4eab-b16d-8834f2151d9a
	I0114 10:32:36.269972  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.269981  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.269997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.270077  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4d5n6","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dba561b-e827-4a6e-afd9-11c68b7e4447","resourceVersion":"471","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5522 chars]
	I0114 10:32:36.270392  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:32:36.270402  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.270408  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.270414  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.271787  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.271805  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.271811  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.271817  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.271823  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.271828  110500 round_trippers.go:580]     Audit-Id: 126c3c4b-b18f-441c-b869-90363ea3dee2
	I0114 10:32:36.271833  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.271838  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.271935  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4","resourceVersion":"549","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io
/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1" [truncated 4430 chars]
	I0114 10:32:36.272154  110500 pod_ready.go:92] pod "kube-proxy-4d5n6" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.272165  110500 pod_ready.go:81] duration metric: took 3.765849ms waiting for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.272172  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.272210  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:36.272218  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.272224  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.272230  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.273735  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.273750  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.273760  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.273769  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.273784  110500 round_trippers.go:580]     Audit-Id: e57ac6dc-00b8-4e56-8601-76f0d7bbb22c
	I0114 10:32:36.273797  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.273809  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.273819  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.273928  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bzd24","generateName":"kube-proxy-","namespace":"kube-system","uid":"3191f786-4823-486a-90e6-be1b1180c23a","resourceVersion":"660","creationTimestamp":"2023-01-14T10:30:20Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 10:32:36.274311  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:36.274329  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.274336  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.274342  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.275632  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.275645  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.275652  110500 round_trippers.go:580]     Audit-Id: 8bcc2f0b-ac50-4161-9c26-9bb0097ebfb8
	I0114 10:32:36.275657  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.275663  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.275668  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.275697  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.275708  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.275877  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m03","uid":"7fb9d125-0cce-4853-b9a9-9348e20e7ae7","resourceVersion":"674","creationTimestamp":"2023-01-14T10:31:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu
mes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{"." [truncated 4248 chars]
	I0114 10:32:36.276181  110500 pod_ready.go:92] pod "kube-proxy-bzd24" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.276195  110500 pod_ready.go:81] duration metric: took 4.017618ms waiting for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.276205  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.455534  110500 request.go:614] Waited for 179.275116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:36.455598  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:36.455602  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.455629  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.455639  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.457708  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.457736  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.457747  110500 round_trippers.go:580]     Audit-Id: 54c225ad-ae45-474b-b73c-0a4296e75b17
	I0114 10:32:36.457756  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.457763  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.457775  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.457794  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.457803  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.457939  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlcll","generateName":"kube-proxy-","namespace":"kube-system","uid":"91e05737-5cbf-404c-8b7c-75045f584885","resourceVersion":"718","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5727 chars]
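The "Waited for ... due to client-side throttling" messages are client-go's token-bucket rate limiter pacing this burst of GETs; as the message itself notes, the delay is local, not API-server priority-and-fairness. A sketch of where those knobs live on rest.Config (the raised values are illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	// With QPS and Burst left at zero, client-go falls back to its defaults
	// (5 requests/s, burst of 10); the ~200ms waits in the log are that token
	// bucket refilling. Raising the limits trades API-server load for fewer
	// client-side waits.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Printf("client configured with QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}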
	I0114 10:32:36.655705  110500 request.go:614] Waited for 197.282889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.655766  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.655771  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.655779  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.655786  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.658009  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.658030  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.658044  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.658052  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.658061  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.658068  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.658077  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.658088  110500 round_trippers.go:580]     Audit-Id: 9962cd09-553e-4e94-9f81-8a21b65473fa
	I0114 10:32:36.658176  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.658482  110500 pod_ready.go:92] pod "kube-proxy-qlcll" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.658493  110500 pod_ready.go:81] duration metric: took 382.279914ms waiting for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.658501  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.855574  110500 request.go:614] Waited for 196.992212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:36.855633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:36.855641  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.855652  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.855660  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.857870  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.857889  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.857896  110500 round_trippers.go:580]     Audit-Id: 9d2c7405-2f70-45e4-b3b5-c264d1b3fc4f
	I0114 10:32:36.857902  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.857907  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.857913  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.857918  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.857924  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.858061  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102822","namespace":"kube-system","uid":"63ee442e-88de-44d9-8512-98c56f1b4942","resourceVersion":"725","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.mirror":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.seen":"2023-01-14T10:28:42.123461701Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4878 chars]
	I0114 10:32:37.055720  110500 request.go:614] Waited for 197.266354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.055769  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.055774  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.055787  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.055797  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.057958  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.057979  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.057988  110500 round_trippers.go:580]     Audit-Id: 1580fd75-e4e0-4a4a-9791-aff04e65f15c
	I0114 10:32:37.057993  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.058002  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.058008  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.058015  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.058021  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.058137  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:37.058454  110500 pod_ready.go:92] pod "kube-scheduler-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:37.058468  110500 pod_ready.go:81] duration metric: took 399.960714ms waiting for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:37.058477  110500 pod_ready.go:38] duration metric: took 13.317646399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:32:37.058494  110500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:32:37.065465  110500 command_runner.go:130] > -16
	I0114 10:32:37.065504  110500 ops.go:34] apiserver oom_adj: -16
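The oom_adj probe above reads the API server's OOM-score adjustment through the node's /proc; a negative value such as -16 tells the kernel's OOM killer to strongly prefer other victims. The same one-liner, wrapped in Go for a host where kube-apiserver runs locally (a sketch, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same pipeline the log shows: resolve kube-apiserver's pid with pgrep,
	// then read that process's oom_adj from /proc.
	out, err := exec.Command("/bin/bash", "-c",
		`cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out))) // e.g. -16
}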
	I0114 10:32:37.065514  110500 kubeadm.go:631] restartCluster took 23.987780678s
	I0114 10:32:37.065526  110500 kubeadm.go:398] StartCluster complete in 24.028830611s
	I0114 10:32:37.065550  110500 settings.go:142] acquiring lock: {Name:mk1c1a895c03873155a8c7da5f3762b351f9952d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:37.065670  110500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.066259  110500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
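The lock.go:35 line above shows the kubeconfig update guarded by a named cross-process lock (with a 500ms retry delay and 1m timeout) so concurrent minikube invocations cannot interleave writes. The sketch below only approximates that safety goal, with a process-local mutex plus a write-temp-then-rename so a reader never observes a torn kubeconfig; all names and paths are illustrative:

package main

import (
	"os"
	"path/filepath"
	"sync"
)

var mu sync.Mutex // serializes writers in this process; minikube's real lock spans processes

// writeFileAtomic stages the bytes in a temp file in the target's directory,
// then renames it over the target, which is atomic on POSIX filesystems.
func writeFileAtomic(path string, data []byte) error {
	mu.Lock()
	defer mu.Unlock()
	tmp, err := os.CreateTemp(filepath.Dir(path), ".kubeconfig-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup if the rename never happens
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}

func main() {
	if err := writeFileAtomic("/tmp/kubeconfig-demo", []byte("apiVersion: v1\nkind: Config\n")); err != nil {
		panic(err)
	}
}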
	I0114 10:32:37.066720  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.066964  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:37.067294  110500 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0114 10:32:37.067309  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.067324  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.067333  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.069540  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.069555  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.069562  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.069567  110500 round_trippers.go:580]     Content-Length: 291
	I0114 10:32:37.069573  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.069578  110500 round_trippers.go:580]     Audit-Id: 72d12c09-7a9f-482a-ba0b-2b59f789418c
	I0114 10:32:37.069583  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.069588  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.069594  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.069612  110500 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a0ae11c7-3256-4ef8-a0cd-ff11f2de358a","resourceVersion":"753","creationTimestamp":"2023-01-14T10:28:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0114 10:32:37.069762  110500 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-102822" rescaled to 1
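The rescale above goes through the coredns Deployment's scale subresource: read spec.replicas and pin it to 1 if it differs (kubeadm normally runs two CoreDNS replicas). A minimal client-go equivalent of that read-modify-update (kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET .../deployments/coredns/scale, as in the log.
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").
			UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns replicas:", scale.Spec.Replicas)
}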
	I0114 10:32:37.069822  110500 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:32:37.072149  110500 out.go:177] * Verifying Kubernetes components...
	I0114 10:32:37.069850  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 10:32:37.069869  110500 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0114 10:32:37.070096  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:37.073566  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:32:37.073588  110500 addons.go:65] Setting storage-provisioner=true in profile "multinode-102822"
	I0114 10:32:37.073605  110500 addons.go:65] Setting default-storageclass=true in profile "multinode-102822"
	I0114 10:32:37.073612  110500 addons.go:227] Setting addon storage-provisioner=true in "multinode-102822"
	W0114 10:32:37.073620  110500 addons.go:236] addon storage-provisioner should already be in state true
	I0114 10:32:37.073683  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:32:37.073623  110500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-102822"
	I0114 10:32:37.073995  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.074114  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.083738  110500 node_ready.go:35] waiting up to 6m0s for node "multinode-102822" to be "Ready" ...
	I0114 10:32:37.101957  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.102248  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:37.104720  110500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:32:37.102705  110500 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0114 10:32:37.106493  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.106510  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.106521  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.106628  110500 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:32:37.106646  110500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 10:32:37.106697  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:37.108695  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.108732  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.108744  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.108754  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.108763  110500 round_trippers.go:580]     Content-Length: 1273
	I0114 10:32:37.108775  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.108784  110500 round_trippers.go:580]     Audit-Id: 0ea79256-b1ab-4ac5-8466-82be87c881b8
	I0114 10:32:37.108794  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.108800  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.108831  110500 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0114 10:32:37.109300  110500 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 10:32:37.109347  110500 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0114 10:32:37.109351  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.109359  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.109368  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.109374  110500 round_trippers.go:473]     Content-Type: application/json
	I0114 10:32:37.112948  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:37.112965  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.112974  110500 round_trippers.go:580]     Audit-Id: bc58afbd-603f-4591-8a19-d9db28fda25c
	I0114 10:32:37.112983  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.112992  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.113004  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.113014  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.113024  110500 round_trippers.go:580]     Content-Length: 1220
	I0114 10:32:37.113030  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.113076  110500 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 10:32:37.113223  110500 addons.go:227] Setting addon default-storageclass=true in "multinode-102822"
	W0114 10:32:37.113242  110500 addons.go:236] addon default-storageclass should already be in state true
	I0114 10:32:37.113268  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:32:37.113633  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
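The PUT above re-applies the "standard" StorageClass; its storageclass.kubernetes.io/is-default-class: "true" annotation is what lets PVCs that name no class bind to it. A minimal client-go sketch of maintaining that annotation (the annotation key is the real upstream one; the kubeconfig path is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// This is the annotation the default-storageclass addon keeps set.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}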
	I0114 10:32:37.134182  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:37.140584  110500 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 10:32:37.140618  110500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 10:32:37.140684  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
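The ssh_runner "scp memory --> ..." lines stream an in-memory asset straight to a path inside the node container over SSH (port 32867 per the sshutil line above). A rough stand-in using a throwaway temp file and the stock scp binary, not minikube's actual implementation (host, port, and payload are copied from or modeled on the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// scpFromMemory mimics "scp memory --> <path>": the payload exists only in
// memory and a short-lived temp file before landing at the remote path.
func scpFromMemory(data []byte, remote string) error {
	tmp, err := os.CreateTemp("", "asset-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(data); err != nil {
		return err
	}
	tmp.Close()
	// minikube tunnels to the node container's sshd on a published host port.
	return exec.Command("scp", "-P", "32867", tmp.Name(),
		"docker@127.0.0.1:"+remote).Run()
}

func main() {
	yaml := []byte("apiVersion: storage.k8s.io/v1\nkind: StorageClass\n") // stand-in payload
	fmt.Println(scpFromMemory(yaml, "/etc/kubernetes/addons/storageclass.yaml"))
}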
	I0114 10:32:37.142257  110500 command_runner.go:130] > apiVersion: v1
	I0114 10:32:37.142279  110500 command_runner.go:130] > data:
	I0114 10:32:37.142286  110500 command_runner.go:130] >   Corefile: |
	I0114 10:32:37.142291  110500 command_runner.go:130] >     .:53 {
	I0114 10:32:37.142298  110500 command_runner.go:130] >         errors
	I0114 10:32:37.142306  110500 command_runner.go:130] >         health {
	I0114 10:32:37.142312  110500 command_runner.go:130] >            lameduck 5s
	I0114 10:32:37.142318  110500 command_runner.go:130] >         }
	I0114 10:32:37.142329  110500 command_runner.go:130] >         ready
	I0114 10:32:37.142340  110500 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0114 10:32:37.142350  110500 command_runner.go:130] >            pods insecure
	I0114 10:32:37.142361  110500 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0114 10:32:37.142372  110500 command_runner.go:130] >            ttl 30
	I0114 10:32:37.142378  110500 command_runner.go:130] >         }
	I0114 10:32:37.142383  110500 command_runner.go:130] >         prometheus :9153
	I0114 10:32:37.142396  110500 command_runner.go:130] >         hosts {
	I0114 10:32:37.142408  110500 command_runner.go:130] >            192.168.58.1 host.minikube.internal
	I0114 10:32:37.142415  110500 command_runner.go:130] >            fallthrough
	I0114 10:32:37.142424  110500 command_runner.go:130] >         }
	I0114 10:32:37.142432  110500 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0114 10:32:37.142443  110500 command_runner.go:130] >            max_concurrent 1000
	I0114 10:32:37.142452  110500 command_runner.go:130] >         }
	I0114 10:32:37.142458  110500 command_runner.go:130] >         cache 30
	I0114 10:32:37.142467  110500 command_runner.go:130] >         loop
	I0114 10:32:37.142476  110500 command_runner.go:130] >         reload
	I0114 10:32:37.142486  110500 command_runner.go:130] >         loadbalance
	I0114 10:32:37.142492  110500 command_runner.go:130] >     }
	I0114 10:32:37.142501  110500 command_runner.go:130] > kind: ConfigMap
	I0114 10:32:37.142507  110500 command_runner.go:130] > metadata:
	I0114 10:32:37.142520  110500 command_runner.go:130] >   creationTimestamp: "2023-01-14T10:28:42Z"
	I0114 10:32:37.142530  110500 command_runner.go:130] >   name: coredns
	I0114 10:32:37.142540  110500 command_runner.go:130] >   namespace: kube-system
	I0114 10:32:37.142549  110500 command_runner.go:130] >   resourceVersion: "369"
	I0114 10:32:37.142554  110500 command_runner.go:130] >   uid: 348659ae-af6c-4ae1-ba1c-2468636d5cd9
	I0114 10:32:37.142667  110500 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
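The ConfigMap dump above is the CoreDNS Corefile; its hosts block pins host.minikube.internal to the host gateway 192.168.58.1 so pods can resolve the host machine. The "skipping" decision only needs to detect that the record is already present, plausibly a substring test; a sketch under that assumption (the helper name is invented):

package main

import (
	"fmt"
	"strings"
)

// hasHostRecord reports whether a rendered Corefile already carries the host
// record, as the "already contains ... skipping" check above concludes.
func hasHostRecord(corefile, name string) bool {
	return strings.Contains(corefile, name)
}

func main() {
	corefile := `
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }`
	fmt.Println(hasHostRecord(corefile, "host.minikube.internal")) // true
}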
	I0114 10:32:37.166923  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:37.232895  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:32:37.255727  110500 request.go:614] Waited for 171.896659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.255799  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.255808  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.255816  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.255826  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.257966  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.257995  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.258005  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.258014  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.258024  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.258037  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.258048  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.258056  110500 round_trippers.go:580]     Audit-Id: 26a9ffda-d3c6-41ff-9b64-02b9f68339e0
	I0114 10:32:37.258212  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:37.258658  110500 node_ready.go:49] node "multinode-102822" has status "Ready":"True"
	I0114 10:32:37.258684  110500 node_ready.go:38] duration metric: took 174.906774ms waiting for node "multinode-102822" to be "Ready" ...
	I0114 10:32:37.258695  110500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
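node_ready applies the pod-readiness pattern one level up: fetch the Node object and inspect its NodeReady condition. A compact sketch of just that predicate, fed a stand-in object rather than a live GET:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isNodeReady mirrors the node_ready check: the node counts as "Ready"
// when its NodeReady condition reports True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	node := &corev1.Node{} // stand-in; the real object comes from GET /api/v1/nodes/<name>
	node.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isNodeReady(node)) // true, matching the log's "Ready":"True"
}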
	I0114 10:32:37.261672  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 10:32:37.455739  110500 request.go:614] Waited for 196.934939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:37.455812  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:37.455819  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.455845  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.455855  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.459944  110500 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 10:32:37.459977  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.459987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.459996  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.460006  110500 round_trippers.go:580]     Audit-Id: 9ef79377-2b98-4dcf-b71e-74e70cc74bad
	I0114 10:32:37.460014  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.460024  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.460035  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.461304  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84721 chars]
	I0114 10:32:37.465016  110500 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:37.474891  110500 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0114 10:32:37.476850  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0114 10:32:37.478813  110500 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 10:32:37.480672  110500 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 10:32:37.521681  110500 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0114 10:32:37.531444  110500 command_runner.go:130] > pod/storage-provisioner configured
	I0114 10:32:37.535193  110500 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0114 10:32:37.537991  110500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 10:32:37.539317  110500 addons.go:488] enableAddons completed in 469.451083ms
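Each command_runner line above is the stdout of kubectl apply run inside the node; "unchanged" and "configured" both mean the apply converged, which is why re-enabling addons on a restarted cluster is safe to repeat. A local equivalent (manifest path copied from the log; running it outside the node assumes a reachable kubeconfig, path illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Idempotent apply: rerunning it on an already-configured cluster only
	// reports "unchanged", as the storage-provisioner lines above show.
	cmd := exec.Command("kubectl", "--kubeconfig", "/home/user/.kube/config",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}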
	I0114 10:32:37.656457  110500 request.go:614] Waited for 191.361178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:37.656516  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:37.656521  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.656528  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.656535  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.658982  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.659003  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.659010  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.659016  110500 round_trippers.go:580]     Audit-Id: 594f46bb-9c2a-47db-b0bd-2919bd22e370
	I0114 10:32:37.659022  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.659028  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.659035  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.659043  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.659176  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:37.855996  110500 request.go:614] Waited for 196.354517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.856061  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.856072  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.856083  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.856096  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.858291  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.858314  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.858321  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.858327  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.858332  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.858337  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.858343  110500 round_trippers.go:580]     Audit-Id: 5b4e38d1-702e-4a2c-b31b-d2ebda836842
	I0114 10:32:37.858350  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.858523  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:38.359596  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:38.359618  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.359626  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.359633  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.361829  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:38.361851  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.361862  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.361871  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.361880  110500 round_trippers.go:580]     Audit-Id: 4553d3a6-d5cb-414b-8f19-6e8030cb3318
	I0114 10:32:38.361891  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.361901  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.361912  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.362088  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:38.362557  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:38.362569  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.362576  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.362582  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.364247  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:38.364267  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.364277  110500 round_trippers.go:580]     Audit-Id: 8e182689-b157-467d-aa0f-9d9b888d9608
	I0114 10:32:38.364294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.364306  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.364321  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.364331  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.364343  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.364481  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:38.859969  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:38.859993  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.860001  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.860007  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.862084  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:38.862106  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.862116  110500 round_trippers.go:580]     Audit-Id: b5d68d28-294d-4edc-aed1-a7efefc5a6a7
	I0114 10:32:38.862124  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.862131  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.862138  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.862147  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.862156  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.862306  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:38.862890  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:38.862906  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.862915  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.862922  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.864596  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:38.864618  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.864628  110500 round_trippers.go:580]     Audit-Id: 97ff09cc-b23a-4680-b752-7f9598de1f65
	I0114 10:32:38.864635  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.864640  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.864648  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.864654  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.864660  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.864836  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.359064  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:39.359088  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.359097  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.359104  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.361396  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:39.361421  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.361433  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.361442  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.361452  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.361468  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.361477  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.361488  110500 round_trippers.go:580]     Audit-Id: 81698645-fa16-4748-8e8b-746b7500c0b0
	I0114 10:32:39.361597  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:39.362018  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:39.362029  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.362036  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.362042  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.363706  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:39.363724  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.363731  110500 round_trippers.go:580]     Audit-Id: 06179dbb-c491-4fdf-9b46-8b57c30a2a02
	I0114 10:32:39.363736  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.363743  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.363751  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.363762  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.363775  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.363902  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.859598  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:39.859624  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.859633  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.859639  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.861841  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:39.861862  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.861869  110500 round_trippers.go:580]     Audit-Id: 5283f801-29b4-4678-bdb3-c59dd8c322ae
	I0114 10:32:39.861875  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.861891  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.861901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.861915  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.861924  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.862110  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:39.862591  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:39.862647  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.862666  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.862677  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.864481  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:39.864498  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.864508  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.864516  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.864524  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.864533  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.864547  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.864556  110500 round_trippers.go:580]     Audit-Id: 55799b22-9268-4877-b94b-a33177d8cdeb
	I0114 10:32:39.864734  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.865116  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:40.359224  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:40.359246  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.359254  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.359261  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.361570  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:40.361599  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.361606  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.361613  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.361619  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.361625  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.361633  110500 round_trippers.go:580]     Audit-Id: fab59685-da79-42f1-9658-97a80bf226a9
	I0114 10:32:40.361638  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.361768  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:40.362231  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:40.362245  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.362253  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.362259  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.364061  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.364077  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.364084  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.364091  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.364100  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.364111  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.364120  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.364126  110500 round_trippers.go:580]     Audit-Id: 43f7e97f-897f-4c5b-b9e7-c2e06b9b42f4
	I0114 10:32:40.364245  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:40.859884  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:40.859905  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.859913  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.859919  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.861912  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.861938  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.861949  110500 round_trippers.go:580]     Audit-Id: 2c9c969f-0642-4f26-bf0a-d2e8bc6a68ed
	I0114 10:32:40.861959  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.861969  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.861978  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.861990  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.862001  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.862133  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:40.862577  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:40.862590  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.862597  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.862605  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.864355  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.864375  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.864385  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.864393  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.864406  110500 round_trippers.go:580]     Audit-Id: 1820abbb-3ffb-4314-afa4-4789a3f8b5fb
	I0114 10:32:40.864412  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.864419  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.864425  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.864542  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:41.359096  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:41.359121  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.359132  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.359141  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.361299  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.361321  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.361328  110500 round_trippers.go:580]     Audit-Id: 034170e7-b7e3-4243-8ab4-133db6e98d26
	I0114 10:32:41.361334  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.361340  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.361346  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.361367  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.361375  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.361529  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:41.361977  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:41.361989  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.361996  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.362006  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.364080  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.364105  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.364114  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.364123  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.364130  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.364137  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.364145  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.364158  110500 round_trippers.go:580]     Audit-Id: 6c30d9ea-d002-4067-b3f3-a45d3334319b
	I0114 10:32:41.364280  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:41.859939  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:41.859959  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.859967  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.859974  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.862111  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.862128  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.862135  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.862140  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.862145  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.862151  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.862156  110500 round_trippers.go:580]     Audit-Id: 5fab01c3-76e3-4314-902a-ce2b17e158b7
	I0114 10:32:41.862161  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.862272  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:41.862698  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:41.862709  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.862716  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.862722  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.864461  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:41.864481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.864492  110500 round_trippers.go:580]     Audit-Id: 70012846-9522-4e4a-b077-d768ada29a5c
	I0114 10:32:41.864501  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.864509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.864521  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.864529  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.864538  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.864653  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:42.359169  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:42.359192  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.359200  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.359207  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.361441  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:42.361465  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.361475  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.361483  110500 round_trippers.go:580]     Audit-Id: 5c90d803-398e-4d0e-b154-3406a73293ce
	I0114 10:32:42.361492  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.361500  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.361509  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.361521  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.361656  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:42.362144  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:42.362159  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.362166  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.362172  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.363986  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:42.364008  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.364019  110500 round_trippers.go:580]     Audit-Id: 28a92848-4048-4824-9116-41fca3477677
	I0114 10:32:42.364031  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.364042  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.364055  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.364067  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.364078  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.364250  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:42.364584  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:42.859846  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:42.859875  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.859883  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.859890  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.862342  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:42.862362  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.862370  110500 round_trippers.go:580]     Audit-Id: 78bdb26b-a666-4f1d-893c-57b64da2bd73
	I0114 10:32:42.862375  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.862381  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.862386  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.862392  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.862397  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.862507  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:42.862938  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:42.862949  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.862956  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.862963  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.864801  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:42.864822  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.864831  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.864840  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.864849  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.864861  110500 round_trippers.go:580]     Audit-Id: 158eefbb-c63f-4fc0-ae90-e23ca6843f48
	I0114 10:32:42.864877  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.864887  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.864999  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:43.359803  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:43.359828  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.359837  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.359843  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.361988  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:43.362007  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.362014  110500 round_trippers.go:580]     Audit-Id: f05e6b2e-be63-42aa-bf95-7b57c23f420d
	I0114 10:32:43.362020  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.362025  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.362034  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.362039  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.362047  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.362144  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:43.362591  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:43.362603  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.362611  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.362618  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.364316  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:43.364337  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.364347  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.364356  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.364369  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.364378  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.364389  110500 round_trippers.go:580]     Audit-Id: 47c3825f-c4f2-4038-b9c9-ee937c9f14c3
	I0114 10:32:43.364401  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.364527  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:43.859053  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:43.859076  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.859084  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.859090  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.861229  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:43.861248  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.861255  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.861261  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.861272  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.861284  110500 round_trippers.go:580]     Audit-Id: fbebfdc6-d1bc-48b9-b0ed-49434b5c9ab0
	I0114 10:32:43.861294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.861301  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.861450  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:43.861923  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:43.861935  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.861943  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.861949  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.863768  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:43.863788  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.863796  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.863802  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.863807  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.863812  110500 round_trippers.go:580]     Audit-Id: 03d3495e-a450-42d8-ab63-885b6e3ff6e9
	I0114 10:32:43.863821  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.863829  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.863951  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:44.359472  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:44.359499  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.359512  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.359518  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.362100  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:44.362134  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.362144  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.362151  110500 round_trippers.go:580]     Audit-Id: 2ecb38ea-7962-4ce4-9634-c8d41bc36023
	I0114 10:32:44.362156  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.362162  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.362171  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.362184  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.362385  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:44.363138  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:44.363160  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.363172  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.363223  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.365040  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:44.365059  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.365066  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.365071  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.365078  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.365090  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.365102  110500 round_trippers.go:580]     Audit-Id: 504ff71e-4c40-4919-915c-04e52d16b2f0
	I0114 10:32:44.365114  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.365237  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:44.365660  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:44.859846  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:44.859869  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.859880  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.859886  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.862148  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:44.862174  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.862184  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.862193  110500 round_trippers.go:580]     Audit-Id: 1aef2bd1-d91e-4c55-b201-3d8fcb31bef9
	I0114 10:32:44.862202  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.862211  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.862218  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.862227  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.862358  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:44.862889  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:44.862901  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.862911  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.862917  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.864803  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:44.864819  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.864826  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.864831  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.864837  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.864844  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.864853  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.864861  110500 round_trippers.go:580]     Audit-Id: 916976a1-8d50-4315-b550-483b9bc9608b
	I0114 10:32:44.865027  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:45.359632  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:45.359652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.359660  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.359677  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.361927  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.361956  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.361967  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.361976  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.361985  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.362039  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.362051  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.362058  110500 round_trippers.go:580]     Audit-Id: 46f9e573-a43c-4914-bc5c-824c57798d3a
	I0114 10:32:45.362172  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:45.362607  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:45.362618  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.362626  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.362633  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.364421  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:45.364440  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.364449  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.364457  110500 round_trippers.go:580]     Audit-Id: da79f983-cde3-4e4d-8aa4-48f23dd813de
	I0114 10:32:45.364464  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.364472  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.364481  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.364494  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.364597  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:45.859227  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:45.859266  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.859280  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.859290  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.861600  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.861624  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.861634  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.861643  110500 round_trippers.go:580]     Audit-Id: b1855bfd-9d92-4b0c-9fc4-20a77939c6d0
	I0114 10:32:45.861651  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.861659  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.861670  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.861682  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.861828  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:45.862329  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:45.862343  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.862350  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.862357  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.864381  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.864399  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.864406  110500 round_trippers.go:580]     Audit-Id: be0b664f-3541-48f9-af1d-471c790dcf54
	I0114 10:32:45.864412  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.864418  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.864426  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.864434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.864443  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.864564  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.359579  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:46.359602  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.359610  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.359617  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.361864  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:46.361890  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.361901  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.361911  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.361920  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.361932  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.361944  110500 round_trippers.go:580]     Audit-Id: 4b299044-0411-4eef-8466-4e0b7f3f27ab
	I0114 10:32:46.361955  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.362100  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:46.362546  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:46.362557  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.362564  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.362573  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.364372  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:46.364392  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.364401  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.364407  110500 round_trippers.go:580]     Audit-Id: d9cac187-6e34-4ad6-8287-02ab069b2549
	I0114 10:32:46.364417  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.364431  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.364441  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.364454  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.364577  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.859167  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:46.859196  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.859204  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.859211  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.861386  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:46.861412  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.861422  110500 round_trippers.go:580]     Audit-Id: e0167e38-5bf6-4576-abdd-910b23e13cc8
	I0114 10:32:46.861431  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.861438  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.861447  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.861462  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.861470  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.861571  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:46.862026  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:46.862037  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.862044  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.862050  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.863759  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:46.863775  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.863781  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.863787  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.863793  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.863801  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.863809  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.863817  110500 round_trippers.go:580]     Audit-Id: ca4fe996-62ad-474d-848c-ccade570ba3d
	I0114 10:32:46.863978  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.864330  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
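
The round_trippers.go lines (request URL and headers, response status and headers) and the truncated request.go "Response Body" lines are client-go's built-in debug logging, emitted by its debugging round tripper once klog verbosity is raised to a high level such as 8, as in this run. A minimal sketch of enabling the same output in a standalone client-go program follows; the "v" and "alsologtostderr" flag names are klog's own, while the surrounding program is hypothetical.

    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        // Route klog through a flag set so verbosity can be set programmatically.
        fs := flag.NewFlagSet("klog", flag.ExitOnError)
        klog.InitFlags(fs)
        _ = fs.Set("v", "8")                  // client-go then logs URLs, headers, status
        _ = fs.Set("alsologtostderr", "true") // mirror log output to stderr
        defer klog.Flush()
        // Any client-go Clientset built after this point will produce
        // round_trippers.go / request.go debug lines like those above.
        klog.Info("debug logging enabled")
    }
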
	I0114 10:32:47.359737  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:47.359759  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.359768  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.359774  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.362284  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:47.362308  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.362318  110500 round_trippers.go:580]     Audit-Id: 35b47af2-d322-448d-9b6f-19d6f47b8f05
	I0114 10:32:47.362327  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.362335  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.362344  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.362353  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.362366  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.362488  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:47.363062  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:47.363082  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.363092  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.363098  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.365099  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:47.365123  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.365133  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.365144  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.365153  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.365162  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.365170  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.365178  110500 round_trippers.go:580]     Audit-Id: 48f2bdf3-9d93-44a9-a63a-3148dd9812b7
	I0114 10:32:47.365278  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:47.859942  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:47.859965  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.859976  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.859984  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.862271  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:47.862297  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.862308  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.862317  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.862324  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.862329  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.862335  110500 round_trippers.go:580]     Audit-Id: 0634038b-38ea-4702-bfee-cb95338954a7
	I0114 10:32:47.862340  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.862439  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:47.862930  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:47.862946  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.862953  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.862960  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.864674  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:47.864737  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.864809  110500 round_trippers.go:580]     Audit-Id: e1d75173-0a5a-4010-a0d9-5c3a5b9e8a49
	I0114 10:32:47.864828  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.864834  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.864840  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.864848  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.864853  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.864975  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.359325  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:48.359347  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.359359  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.359366  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.361551  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:48.361576  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.361586  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.361595  110500 round_trippers.go:580]     Audit-Id: 8ceb8c5f-cbf9-479b-a2c0-9ce1d42b4db1
	I0114 10:32:48.361604  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.361614  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.361627  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.361641  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.361766  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:48.362247  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:48.362262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.362270  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.362276  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.364040  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:48.364062  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.364079  110500 round_trippers.go:580]     Audit-Id: 44d53443-bee2-482c-b5f3-8914e7fce187
	I0114 10:32:48.364091  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.364105  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.364113  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.364123  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.364132  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.364251  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.859955  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:48.859984  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.859998  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.860008  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.862410  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:48.862438  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.862453  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.862472  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.862479  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.862487  110500 round_trippers.go:580]     Audit-Id: 959b5886-f2da-4e70-bd52-6ce100746f2e
	I0114 10:32:48.862497  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.862509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.862637  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:48.863215  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:48.863232  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.863243  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.863253  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.864931  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:48.864949  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.864959  110500 round_trippers.go:580]     Audit-Id: 98987253-9d9e-433d-856c-fe638637ea02
	I0114 10:32:48.864969  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.864978  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.864987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.864997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.865009  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.865115  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.865421  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
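
The recurring "Ready":"False" verdict is computed from the pod's status.conditions, visible (truncated) in the response bodies above: a pod counts as ready only when its PodReady condition has status True. Here is a self-contained sketch of that check; isPodReady is an illustrative helper name, not a quote of minikube's source.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Mirrors the state coredns-565d847f94-f5dzh reports at this point in the log.
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println(isPodReady(pod)) // false
    }
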
	I0114 10:32:49.359720  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:49.359745  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.359755  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.359766  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.361934  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:49.361955  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.361962  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.361968  110500 round_trippers.go:580]     Audit-Id: 8c97b06b-cb60-4069-83b5-4dee919ddacb
	I0114 10:32:49.361973  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.361979  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.361984  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.361994  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.362122  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:49.362693  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:49.362710  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.362721  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.362731  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.364448  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:49.364474  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.364484  110500 round_trippers.go:580]     Audit-Id: c3b4d31f-3c42-413f-a50e-7006e7195737
	I0114 10:32:49.364491  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.364497  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.364506  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.364511  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.364519  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.364634  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:49.859107  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:49.859143  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.859156  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.859166  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.861351  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:49.861376  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.861387  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.861396  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.861405  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.861413  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.861423  110500 round_trippers.go:580]     Audit-Id: acf80202-d32e-427f-9905-65e7900c3476
	I0114 10:32:49.861430  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.861524  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:49.862006  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:49.862021  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.862033  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.862045  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.863792  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:49.863808  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.863814  110500 round_trippers.go:580]     Audit-Id: 43d97ecd-c9a3-4755-ba76-b682bd120b9a
	I0114 10:32:49.863819  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.863824  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.863830  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.863835  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.863843  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.863918  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:50.359575  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:50.359599  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.359607  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.359614  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.361914  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:50.361936  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.361944  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.361949  110500 round_trippers.go:580]     Audit-Id: 4e315205-0240-45dd-a4d4-efa0c059b803
	I0114 10:32:50.361955  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.361960  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.361968  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.361974  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.362080  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:50.362668  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:50.362684  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.362695  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.362711  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.364414  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:50.364438  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.364449  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.364458  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.364476  110500 round_trippers.go:580]     Audit-Id: ba0c18f4-30aa-46b3-b9be-ce1e31c041f9
	I0114 10:32:50.364482  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.364488  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.364493  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.364600  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:50.859237  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:50.859262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.859271  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.859277  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.861712  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:50.861744  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.861756  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.861766  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.861776  110500 round_trippers.go:580]     Audit-Id: 7dc14a15-e343-4565-bb6d-eaa7202a8b3f
	I0114 10:32:50.861781  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.861787  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.861792  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.861963  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:50.862594  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:50.862612  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.862624  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.862634  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.864616  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:50.864643  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.864650  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.864656  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.864661  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.864666  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.864671  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.864676  110500 round_trippers.go:580]     Audit-Id: 777e0a1b-ac51-4e60-9cf8-245b5a0d6267
	I0114 10:32:50.864792  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:51.359157  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:51.359192  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.359201  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.359208  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.361393  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:51.361418  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.361428  110500 round_trippers.go:580]     Audit-Id: 727ea24c-1766-4936-b762-2c67365137af
	I0114 10:32:51.361436  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.361444  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.361457  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.361466  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.361476  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.361596  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:51.362079  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:51.362092  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.362102  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.362111  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.363956  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:51.363980  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.363990  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.363999  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.364012  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.364021  110500 round_trippers.go:580]     Audit-Id: 92c74176-28e0-41c0-ad03-3dd4ad01620d
	I0114 10:32:51.364033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.364041  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.364146  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:51.364451  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
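The ~500ms GET pairs above are minikube's readiness poll: fetch the pod, check its Ready condition, re-fetch until it flips. A minimal client-go sketch of such a loop, assuming the default kubeconfig path (illustrative, not minikube's actual pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 500ms, give up after the 6m0s budget the log mentions.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-f5dzh", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("coredns Ready:", err == nil)
    }

In the log, pod_ready.go:78 marks the start of a wait, :102 a not-yet-Ready probe, :92 the Ready result, and :81 the duration metric.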
	I0114 10:32:51.859788  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:51.859810  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.859819  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.859826  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.862051  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:51.862083  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.862098  110500 round_trippers.go:580]     Audit-Id: 5f8c387a-66e7-4893-b486-51686e6f4c1b
	I0114 10:32:51.862108  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.862119  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.862134  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.862143  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.862156  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.862263  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:51.862710  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:51.862724  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.862745  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.862756  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.864464  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:51.864481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.864487  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.864493  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.864498  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.864503  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.864508  110500 round_trippers.go:580]     Audit-Id: fbbfa460-93b1-4017-b7db-12d7f5dd096a
	I0114 10:32:51.864515  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.864633  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:52.359184  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:52.359216  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.359224  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.359231  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.361569  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:52.361590  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.361596  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.361602  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.361608  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.361617  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.361624  110500 round_trippers.go:580]     Audit-Id: 46cd87ca-c92a-40e7-9abc-58fb1d1da845
	I0114 10:32:52.361633  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.361785  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:52.362272  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:52.362293  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.362300  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.362306  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.364201  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:52.364225  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.364240  110500 round_trippers.go:580]     Audit-Id: 3bfff669-7a3d-4f5a-ba7a-cb801f029ad5
	I0114 10:32:52.364246  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.364252  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.364257  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.364263  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.364268  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.364382  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:52.859017  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:52.859043  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.859053  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.859061  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.861311  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:52.861335  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.861346  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.861354  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.861362  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.861370  110500 round_trippers.go:580]     Audit-Id: 6fa55034-e1ef-4b07-869f-e699f2e6ad9b
	I0114 10:32:52.861379  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.861397  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.861538  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:52.862025  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:52.862039  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.862047  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.862054  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.863574  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:52.863592  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.863601  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.863611  110500 round_trippers.go:580]     Audit-Id: 551ef0e8-c97f-4844-9422-c5752cb489bd
	I0114 10:32:52.863619  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.863627  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.863636  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.863646  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.863779  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.359608  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:53.359632  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.359645  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.359652  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.361956  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:53.361979  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.361987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.361993  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.361998  110500 round_trippers.go:580]     Audit-Id: 060b65f4-4c9b-400d-923e-42224d7765d1
	I0114 10:32:53.362003  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.362008  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.362013  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.362104  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:53.362577  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.362593  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.362603  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.362610  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.364347  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.364377  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.364386  110500 round_trippers.go:580]     Audit-Id: 97269ed2-d9cf-4bae-ae56-b417a88fc922
	I0114 10:32:53.364392  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.364398  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.364419  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.364430  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.364435  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.364547  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.364848  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:53.859079  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:53.859100  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.859108  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.859114  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.861340  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:53.861359  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.861366  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.861371  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.861377  110500 round_trippers.go:580]     Audit-Id: 4dac6af2-31c0-4d5b-ad54-0b90bc13b7f1
	I0114 10:32:53.861382  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.861387  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.861392  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.861477  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6542 chars]
	I0114 10:32:53.861953  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.861968  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.861976  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.861982  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.863752  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.863776  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.863786  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.863795  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.863803  110500 round_trippers.go:580]     Audit-Id: 3d572e93-949f-499a-a293-4ddb3e2a2d6d
	I0114 10:32:53.863812  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.863821  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.863829  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.863946  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.864233  110500 pod_ready.go:92] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.864250  110500 pod_ready.go:81] duration metric: took 16.39921044s waiting for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
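CoreDNS alone accounted for roughly 16.4s of the restart verification; the remaining control-plane pods checked below clear in milliseconds because they are already Ready. An equivalent manual check, assuming minikube's usual kubeconfig context naming (context name = profile name):

    kubectl --context multinode-102822 -n kube-system wait pod \
      etcd-multinode-102822 kube-apiserver-multinode-102822 \
      kube-controller-manager-multinode-102822 \
      --for=condition=Ready --timeout=6m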
	I0114 10:32:53.864258  110500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.864298  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:53.864306  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.864313  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.864318  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.865875  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.865896  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.865905  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.865916  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.865925  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.865938  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.865949  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.865961  110500 round_trippers.go:580]     Audit-Id: 82f791e6-7642-40ac-a8b0-fb679511ec02
	I0114 10:32:53.866052  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"777","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6035 chars]
	I0114 10:32:53.866410  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.866422  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.866429  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.866436  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.867777  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.867797  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.867807  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.867816  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.867825  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.867838  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.867848  110500 round_trippers.go:580]     Audit-Id: 058bebb4-b275-46c2-9a74-1b5ca44db29a
	I0114 10:32:53.867861  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.867985  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.868296  110500 pod_ready.go:92] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.868309  110500 pod_ready.go:81] duration metric: took 4.045313ms waiting for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.868324  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.868372  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102822
	I0114 10:32:53.868380  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.868387  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.868394  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.870207  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.870223  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.870234  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.870243  110500 round_trippers.go:580]     Audit-Id: 1d446373-a202-405c-b237-c6843904c253
	I0114 10:32:53.870261  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.870270  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.870283  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.870295  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.870409  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102822","namespace":"kube-system","uid":"c74a88c9-d603-4a80-a194-de75c8d0a3a5","resourceVersion":"770","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.mirror":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.seen":"2023-01-14T10:28:42.123458577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8421 chars]
	I0114 10:32:53.870795  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.870809  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.870819  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.870828  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.872366  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.872389  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.872399  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.872408  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.872424  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.872434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.872447  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.872460  110500 round_trippers.go:580]     Audit-Id: 472b3cbd-28d7-44ac-a15d-56c46a3c4908
	I0114 10:32:53.872539  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.872804  110500 pod_ready.go:92] pod "kube-apiserver-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.872817  110500 pod_ready.go:81] duration metric: took 4.480396ms waiting for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.872828  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.872870  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102822
	I0114 10:32:53.872880  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.872890  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.872900  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.874460  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.874482  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.874490  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.874497  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.874504  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.874510  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.874515  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.874520  110500 round_trippers.go:580]     Audit-Id: c397a327-3c31-4fec-abe6-15a2e07084e1
	I0114 10:32:53.874669  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102822","namespace":"kube-system","uid":"85c8a264-96f3-4fcf-affd-917b94bdd177","resourceVersion":"774","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.mirror":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.seen":"2023-01-14T10:28:42.123460297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7996 chars]
	I0114 10:32:53.875051  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.875063  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.875070  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.875077  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.876617  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.876697  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.876721  110500 round_trippers.go:580]     Audit-Id: fec4f21d-1840-4d29-b02a-62b90536fe0e
	I0114 10:32:53.876728  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.876733  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.876741  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.876747  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.876754  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.876851  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.877124  110500 pod_ready.go:92] pod "kube-controller-manager-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.877137  110500 pod_ready.go:81] duration metric: took 4.30113ms waiting for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.877149  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.877191  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d5n6
	I0114 10:32:53.877201  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.877219  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.877234  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.878711  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.878727  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.878734  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.878742  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.878750  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.878777  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.878783  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.878789  110500 round_trippers.go:580]     Audit-Id: 84194089-138c-4790-b159-7929e98278bb
	I0114 10:32:53.878874  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4d5n6","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dba561b-e827-4a6e-afd9-11c68b7e4447","resourceVersion":"471","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5522 chars]
	I0114 10:32:53.879244  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:32:53.879258  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.879265  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.879271  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.880734  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.880759  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.880769  110500 round_trippers.go:580]     Audit-Id: f14db8a2-ec1f-4484-a93c-8313811b037d
	I0114 10:32:53.880779  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.880788  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.880796  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.880802  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.880808  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.880892  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4","resourceVersion":"549","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io
/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1" [truncated 4430 chars]
	I0114 10:32:53.881144  110500 pod_ready.go:92] pod "kube-proxy-4d5n6" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.881161  110500 pod_ready.go:81] duration metric: took 4.002111ms waiting for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
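Each kube-proxy check pairs the pod GET with a GET of the node that proxy runs on (multinode-102822-m02 above, multinode-102822-m03 below). A sketch of the corresponding node-side condition check, illustrative rather than minikube's own code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's NodeReady condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(nodeReady(cs, "multinode-102822-m02"))
    }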
	I0114 10:32:53.881169  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.059627  110500 request.go:614] Waited for 178.374902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:54.059726  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:54.059741  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.059754  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.059768  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.061871  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.061896  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.061908  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.061916  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.061923  110500 round_trippers.go:580]     Audit-Id: aed13262-a010-46cc-af38-a5bd25ab0d48
	I0114 10:32:54.061933  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.061944  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.061957  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.062158  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bzd24","generateName":"kube-proxy-","namespace":"kube-system","uid":"3191f786-4823-486a-90e6-be1b1180c23a","resourceVersion":"660","creationTimestamp":"2023-01-14T10:30:20Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
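The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter rather than the server: with the client defaults of QPS=5 and Burst=10, the bucket admits one request per 200ms once the burst is spent, which matches the ~180-197ms waits logged here. A sketch of raising those limits on a rest.Config, assuming the same kubeconfig:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5 / Burst=10; raising them (or installing a custom
        // RateLimiter) removes the client-side delays seen in this log.
        cfg.QPS = 50
        cfg.Burst = 100
        fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
    }

Whether extra client QPS is safe depends on the apiserver's priority-and-fairness limits, which is exactly the distinction the log message draws.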
	I0114 10:32:54.259954  110500 request.go:614] Waited for 197.349508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:54.260021  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:54.260027  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.260035  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.260044  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.261934  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:54.261954  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.261961  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.261967  110500 round_trippers.go:580]     Audit-Id: 4baf5a74-b2c4-429b-8b3c-4634d55c2954
	I0114 10:32:54.261972  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.261977  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.261983  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.261991  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.262089  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m03","uid":"7fb9d125-0cce-4853-b9a9-9348e20e7ae7","resourceVersion":"674","creationTimestamp":"2023-01-14T10:31:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu
mes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{"." [truncated 4248 chars]
	I0114 10:32:54.262369  110500 pod_ready.go:92] pod "kube-proxy-bzd24" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:54.262382  110500 pod_ready.go:81] duration metric: took 381.20861ms waiting for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.262392  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.459841  110500 request.go:614] Waited for 197.376659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:54.459903  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:54.459909  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.459917  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.459930  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.462326  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.462351  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.462362  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.462371  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.462381  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.462394  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.462407  110500 round_trippers.go:580]     Audit-Id: d19a543b-1f00-4a60-a98d-7c9d97051362
	I0114 10:32:54.462420  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.462560  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlcll","generateName":"kube-proxy-","namespace":"kube-system","uid":"91e05737-5cbf-404c-8b7c-75045f584885","resourceVersion":"718","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5727 chars]
	I0114 10:32:54.659432  110500 request.go:614] Waited for 196.351122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:54.659482  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:54.659487  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.659495  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.659501  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.661545  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.661569  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.661580  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.661589  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.661598  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.661610  110500 round_trippers.go:580]     Audit-Id: e4c1d17c-ce25-4fe7-a6f5-b07ee5fc48ab
	I0114 10:32:54.661624  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.661635  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.661721  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:54.662011  110500 pod_ready.go:92] pod "kube-proxy-qlcll" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:54.662023  110500 pod_ready.go:81] duration metric: took 399.620307ms waiting for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.662034  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.859494  110500 request.go:614] Waited for 197.400011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:54.859552  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:54.859557  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.859564  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.859571  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.861821  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.861852  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.861865  110500 round_trippers.go:580]     Audit-Id: ccc6f823-3989-4b53-8ff0-d1337b0b0a61
	I0114 10:32:54.861874  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.861886  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.861899  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.861909  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.861919  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.862042  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102822","namespace":"kube-system","uid":"63ee442e-88de-44d9-8512-98c56f1b4942","resourceVersion":"725","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.mirror":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.seen":"2023-01-14T10:28:42.123461701Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4878 chars]
	I0114 10:32:55.059823  110500 request.go:614] Waited for 197.354082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:55.059872  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:55.059876  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.059884  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.059897  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.062004  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:55.062027  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.062038  110500 round_trippers.go:580]     Audit-Id: d564f563-4343-426b-a870-97784639d546
	I0114 10:32:55.062046  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.062056  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.062064  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.062073  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.062086  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.062203  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:55.062558  110500 pod_ready.go:92] pod "kube-scheduler-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:55.062577  110500 pod_ready.go:81] duration metric: took 400.532751ms waiting for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:55.062591  110500 pod_ready.go:38] duration metric: took 17.803879464s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:32:55.062613  110500 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:32:55.062662  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:55.072334  110500 command_runner.go:130] > 1113
	I0114 10:32:55.072394  110500 api_server.go:71] duration metric: took 18.002517039s to wait for apiserver process to appear ...
	I0114 10:32:55.072408  110500 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:32:55.072418  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:55.077405  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0114 10:32:55.077450  110500 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0114 10:32:55.077454  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.077462  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.077468  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.078157  110500 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0114 10:32:55.078182  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.078189  110500 round_trippers.go:580]     Audit-Id: adb19759-4de9-46d5-95cf-8b480d9bd7f5
	I0114 10:32:55.078195  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.078200  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.078207  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.078213  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.078219  110500 round_trippers.go:580]     Content-Length: 263
	I0114 10:32:55.078224  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.078239  110500 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 10:32:55.078279  110500 api_server.go:140] control plane version: v1.25.3
	I0114 10:32:55.078291  110500 api_server.go:130] duration metric: took 5.878523ms to wait for apiserver health ...
	I0114 10:32:55.078298  110500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:32:55.259707  110500 request.go:614] Waited for 181.324482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.259776  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.259781  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.259791  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.259800  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.263248  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:55.263272  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.263280  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.263286  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.263292  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.263297  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.263303  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.263310  110500 round_trippers.go:580]     Audit-Id: 8a596230-8294-46c4-b65b-17375baa5d42
	I0114 10:32:55.263957  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84909 chars]
	I0114 10:32:55.266933  110500 system_pods.go:59] 12 kube-system pods found
	I0114 10:32:55.266958  110500 system_pods.go:61] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:55.266964  110500 system_pods.go:61] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running
	I0114 10:32:55.266968  110500 system_pods.go:61] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:55.266972  110500 system_pods.go:61] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:55.266976  110500 system_pods.go:61] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running
	I0114 10:32:55.266980  110500 system_pods.go:61] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:55.266986  110500 system_pods.go:61] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running
	I0114 10:32:55.266993  110500 system_pods.go:61] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:55.266998  110500 system_pods.go:61] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:55.267003  110500 system_pods.go:61] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:55.267007  110500 system_pods.go:61] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:55.267014  110500 system_pods.go:61] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:32:55.267026  110500 system_pods.go:74] duration metric: took 188.723685ms to wait for pod list to return data ...
	I0114 10:32:55.267033  110500 default_sa.go:34] waiting for default service account to be created ...
	I0114 10:32:55.459429  110500 request.go:614] Waited for 192.340757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0114 10:32:55.459508  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0114 10:32:55.459520  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.459532  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.459547  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.461495  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:55.461513  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.461520  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.461526  110500 round_trippers.go:580]     Audit-Id: 29aeece9-a829-4503-b39f-8d5844636b92
	I0114 10:32:55.461531  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.461536  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.461541  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.461547  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.461552  110500 round_trippers.go:580]     Content-Length: 261
	I0114 10:32:55.461569  110500 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ed5470f9-ec28-44cb-ac49-0dbdbeab7993","resourceVersion":"329","creationTimestamp":"2023-01-14T10:28:55Z"}}]}
	I0114 10:32:55.461729  110500 default_sa.go:45] found service account: "default"
	I0114 10:32:55.461745  110500 default_sa.go:55] duration metric: took 194.706664ms for default service account to be created ...
	I0114 10:32:55.461752  110500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 10:32:55.659076  110500 request.go:614] Waited for 197.269144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.659133  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.659137  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.659145  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.659152  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.662730  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:55.662755  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.662770  110500 round_trippers.go:580]     Audit-Id: d073e5a7-e138-4f76-b486-e769fbc5f5e6
	I0114 10:32:55.662778  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.662786  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.662794  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.662804  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.662817  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.663478  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84909 chars]
	I0114 10:32:55.666072  110500 system_pods.go:86] 12 kube-system pods found
	I0114 10:32:55.666095  110500 system_pods.go:89] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:55.666101  110500 system_pods.go:89] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running
	I0114 10:32:55.666106  110500 system_pods.go:89] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:55.666111  110500 system_pods.go:89] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:55.666116  110500 system_pods.go:89] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running
	I0114 10:32:55.666123  110500 system_pods.go:89] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:55.666132  110500 system_pods.go:89] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running
	I0114 10:32:55.666138  110500 system_pods.go:89] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:55.666145  110500 system_pods.go:89] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:55.666149  110500 system_pods.go:89] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:55.666153  110500 system_pods.go:89] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:55.666163  110500 system_pods.go:89] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:32:55.666171  110500 system_pods.go:126] duration metric: took 204.414948ms to wait for k8s-apps to be running ...
	I0114 10:32:55.666181  110500 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 10:32:55.666219  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:32:55.675950  110500 system_svc.go:56] duration metric: took 9.754834ms WaitForService to wait for kubelet.
	I0114 10:32:55.675980  110500 kubeadm.go:573] duration metric: took 18.606132423s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 10:32:55.675999  110500 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:32:55.859434  110500 request.go:614] Waited for 183.362713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:55.859502  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:55.859514  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.859522  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.859528  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.862021  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:55.862052  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.862064  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.862073  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.862082  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.862095  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.862111  110500 round_trippers.go:580]     Audit-Id: 6dd48a4e-f50e-492a-8dff-09e669805baa
	I0114 10:32:55.862119  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.862314  110500 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15962 chars]
	I0114 10:32:55.862905  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862918  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862930  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862936  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862941  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862945  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862952  110500 node_conditions.go:105] duration metric: took 186.947551ms to run NodePressure ...
	I0114 10:32:55.862961  110500 start.go:217] waiting for startup goroutines ...
	I0114 10:32:55.863404  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:55.863497  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:55.866941  110500 out.go:177] * Starting worker node multinode-102822-m02 in cluster multinode-102822
	I0114 10:32:55.868288  110500 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:32:55.869765  110500 out.go:177] * Pulling base image ...
	I0114 10:32:55.871152  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:32:55.871175  110500 cache.go:57] Caching tarball of preloaded images
	I0114 10:32:55.871229  110500 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:32:55.871304  110500 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:32:55.871330  110500 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:32:55.871441  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:55.893561  110500 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:32:55.893584  110500 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:32:55.893609  110500 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:32:55.893646  110500 start.go:364] acquiring machines lock for multinode-102822-m02: {Name:mk25af419661492cbd58b718b64b51677c98136a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:32:55.893781  110500 start.go:368] acquired machines lock for "multinode-102822-m02" in 104.709µs
	I0114 10:32:55.893802  110500 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:32:55.893807  110500 fix.go:55] fixHost starting: m02
	I0114 10:32:55.894020  110500 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:32:55.917751  110500 fix.go:103] recreateIfNeeded on multinode-102822-m02: state=Stopped err=<nil>
	W0114 10:32:55.917777  110500 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:32:55.920259  110500 out.go:177] * Restarting existing docker container for "multinode-102822-m02" ...
	I0114 10:32:55.921900  110500 cli_runner.go:164] Run: docker start multinode-102822-m02
	I0114 10:32:56.303574  110500 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:32:56.328701  110500 kic.go:426] container "multinode-102822-m02" state is running.
	I0114 10:32:56.329001  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:32:56.353818  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:56.354054  110500 machine.go:88] provisioning docker machine ...
	I0114 10:32:56.354080  110500 ubuntu.go:169] provisioning hostname "multinode-102822-m02"
	I0114 10:32:56.354126  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:56.378925  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:56.379088  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32872 <nil> <nil>}
	I0114 10:32:56.379107  110500 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102822-m02 && echo "multinode-102822-m02" | sudo tee /etc/hostname
	I0114 10:32:56.379767  110500 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47252->127.0.0.1:32872: read: connection reset by peer
	I0114 10:32:59.504292  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102822-m02
	
	I0114 10:32:59.504372  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.529104  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:59.529255  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32872 <nil> <nil>}
	I0114 10:32:59.529273  110500 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102822-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102822-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102822-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:32:59.643446  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:32:59.643478  110500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:32:59.643495  110500 ubuntu.go:177] setting up certificates
	I0114 10:32:59.643503  110500 provision.go:83] configureAuth start
	I0114 10:32:59.643550  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:32:59.666898  110500 provision.go:138] copyHostCerts
	I0114 10:32:59.666931  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:59.666953  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:32:59.666961  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:59.667021  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:32:59.667087  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:59.667104  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:32:59.667109  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:59.667132  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:32:59.667170  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:59.667183  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:32:59.667189  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:59.667207  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:32:59.667255  110500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.multinode-102822-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-102822-m02]
	I0114 10:32:59.772545  110500 provision.go:172] copyRemoteCerts
	I0114 10:32:59.772598  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:32:59.772629  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.795246  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:32:59.879398  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 10:32:59.879459  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:32:59.896300  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 10:32:59.896363  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0114 10:32:59.913524  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 10:32:59.913588  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:32:59.930401  110500 provision.go:86] duration metric: configureAuth took 286.883524ms
	I0114 10:32:59.930432  110500 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:32:59.930616  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:59.930627  110500 machine.go:91] provisioned docker machine in 3.576558371s
	I0114 10:32:59.930634  110500 start.go:300] post-start starting for "multinode-102822-m02" (driver="docker")
	I0114 10:32:59.930640  110500 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:32:59.930681  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:32:59.930713  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.954609  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.039058  110500 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:33:00.041623  110500 command_runner.go:130] > NAME="Ubuntu"
	I0114 10:33:00.041638  110500 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 10:33:00.041643  110500 command_runner.go:130] > ID=ubuntu
	I0114 10:33:00.041652  110500 command_runner.go:130] > ID_LIKE=debian
	I0114 10:33:00.041659  110500 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 10:33:00.041669  110500 command_runner.go:130] > VERSION_ID="20.04"
	I0114 10:33:00.041686  110500 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 10:33:00.041695  110500 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 10:33:00.041702  110500 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 10:33:00.041715  110500 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 10:33:00.041722  110500 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 10:33:00.041727  110500 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 10:33:00.041809  110500 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:33:00.041826  110500 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:33:00.041837  110500 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:33:00.041848  110500 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:33:00.041863  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:33:00.041918  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:33:00.042001  110500 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:33:00.042015  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /etc/ssl/certs/103062.pem
	I0114 10:33:00.042098  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:33:00.048428  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:33:00.065256  110500 start.go:303] post-start completed in 134.608719ms
	I0114 10:33:00.065333  110500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:33:00.065370  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.089347  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.171707  110500 command_runner.go:130] > 18%!
	(MISSING)I0114 10:33:00.171953  110500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:33:00.175843  110500 command_runner.go:130] > 239G
	I0114 10:33:00.175869  110500 fix.go:57] fixHost completed within 4.282059343s
	I0114 10:33:00.175880  110500 start.go:83] releasing machines lock for "multinode-102822-m02", held for 4.282085064s
	I0114 10:33:00.175958  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:33:00.204131  110500 out.go:177] * Found network options:
	I0114 10:33:00.205771  110500 out.go:177]   - NO_PROXY=192.168.58.2
	W0114 10:33:00.207135  110500 proxy.go:119] fail to check proxy env: Error ip not in block
	W0114 10:33:00.207169  110500 proxy.go:119] fail to check proxy env: Error ip not in block
	I0114 10:33:00.207243  110500 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:33:00.207283  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.207345  110500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:33:00.207411  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.233561  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.237305  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.349674  110500 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 10:33:00.349754  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:33:00.358943  110500 docker.go:189] disabling docker service ...
	I0114 10:33:00.358987  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:33:00.368539  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:33:00.377330  110500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:33:00.458087  110500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:33:00.532879  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:33:00.542016  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:33:00.553731  110500 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:33:00.553758  110500 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:33:00.554499  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:33:00.562521  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:33:00.570518  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:33:00.578409  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0114 10:33:00.586533  110500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:33:00.593065  110500 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0114 10:33:00.593116  110500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:33:00.599333  110500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:33:00.669768  110500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:33:00.743175  110500 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:33:00.743240  110500 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:33:00.746690  110500 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0114 10:33:00.746715  110500 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 10:33:00.746721  110500 command_runner.go:130] > Device: fch/252d	Inode: 118         Links: 1
	I0114 10:33:00.746728  110500 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:33:00.746734  110500 command_runner.go:130] > Access: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746739  110500 command_runner.go:130] > Modify: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746744  110500 command_runner.go:130] > Change: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746750  110500 command_runner.go:130] >  Birth: -
	I0114 10:33:00.746772  110500 start.go:472] Will wait 60s for crictl version
	I0114 10:33:00.746812  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:33:00.749846  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:33:00.749902  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:33:00.774327  110500 command_runner.go:130] ! time="2023-01-14T10:33:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:33:00.774386  110500 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:33:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:33:15.181491  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:33:15.203527  110500 command_runner.go:130] > Version:  0.1.0
	I0114 10:33:15.203557  110500 command_runner.go:130] > RuntimeName:  containerd
	I0114 10:33:15.203564  110500 command_runner.go:130] > RuntimeVersion:  1.6.10
	I0114 10:33:15.203572  110500 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0114 10:33:15.203593  110500 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:33:15.203645  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:33:15.225426  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:33:15.226779  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:33:15.248561  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:33:15.252201  110500 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:33:15.253862  110500 out.go:177]   - env NO_PROXY=192.168.58.2
	I0114 10:33:15.255234  110500 cli_runner.go:164] Run: docker network inspect multinode-102822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:33:15.278552  110500 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0114 10:33:15.281742  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:33:15.290843  110500 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822 for IP: 192.168.58.3
	I0114 10:33:15.290938  110500 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:33:15.290983  110500 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:33:15.291001  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 10:33:15.291018  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 10:33:15.291034  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:33:15.291044  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:33:15.291086  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:33:15.291122  110500 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:33:15.291137  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:33:15.291172  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:33:15.291200  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:33:15.291232  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:33:15.291294  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:33:15.291328  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem -> /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.291340  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.291350  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.291733  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:33:15.308950  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:33:15.326682  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:33:15.343586  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:33:15.360810  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:33:15.378267  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:33:15.394623  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:33:15.411806  110500 ssh_runner.go:195] Run: openssl version
	I0114 10:33:15.416453  110500 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 10:33:15.416527  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:33:15.423431  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426436  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426476  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426513  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.431093  110500 command_runner.go:130] > 51391683
	I0114 10:33:15.431280  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:33:15.438119  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:33:15.445156  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448124  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448172  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448210  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.452776  110500 command_runner.go:130] > 3ec20f2e
	I0114 10:33:15.452816  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:33:15.459313  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:33:15.466112  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.468925  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.469032  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.469077  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.473645  110500 command_runner.go:130] > b5213941
	I0114 10:33:15.473797  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
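The three blocks above follow the same pattern: hash a CA certificate with openssl, then symlink it into /etc/ssl/certs under that subject hash so OpenSSL's directory lookup can find it. A standalone sketch of the same technique, assuming a shell with openssl available; the loop is illustrative, not minikube's actual code, and it ignores hash collisions (which would need .1, .2 suffixes):

	# link each CA cert under its OpenSSL subject hash, as the steps above do
	for c in /usr/share/ca-certificates/*.pem; do
	  h=$(openssl x509 -hash -noout -in "$c")
	  sudo ln -fs "$c" "/etc/ssl/certs/$h.0"
	done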
	I0114 10:33:15.480540  110500 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:33:15.504106  110500 command_runner.go:130] > {
	I0114 10:33:15.504131  110500 command_runner.go:130] >   "status": {
	I0114 10:33:15.504139  110500 command_runner.go:130] >     "conditions": [
	I0114 10:33:15.504148  110500 command_runner.go:130] >       {
	I0114 10:33:15.504158  110500 command_runner.go:130] >         "type": "RuntimeReady",
	I0114 10:33:15.504166  110500 command_runner.go:130] >         "status": true,
	I0114 10:33:15.504173  110500 command_runner.go:130] >         "reason": "",
	I0114 10:33:15.504181  110500 command_runner.go:130] >         "message": ""
	I0114 10:33:15.504191  110500 command_runner.go:130] >       },
	I0114 10:33:15.504197  110500 command_runner.go:130] >       {
	I0114 10:33:15.504207  110500 command_runner.go:130] >         "type": "NetworkReady",
	I0114 10:33:15.504216  110500 command_runner.go:130] >         "status": true,
	I0114 10:33:15.504226  110500 command_runner.go:130] >         "reason": "",
	I0114 10:33:15.504245  110500 command_runner.go:130] >         "message": ""
	I0114 10:33:15.504252  110500 command_runner.go:130] >       }
	I0114 10:33:15.504255  110500 command_runner.go:130] >     ]
	I0114 10:33:15.504259  110500 command_runner.go:130] >   },
	I0114 10:33:15.504263  110500 command_runner.go:130] >   "cniconfig": {
	I0114 10:33:15.504267  110500 command_runner.go:130] >     "PluginDirs": [
	I0114 10:33:15.504272  110500 command_runner.go:130] >       "/opt/cni/bin"
	I0114 10:33:15.504276  110500 command_runner.go:130] >     ],
	I0114 10:33:15.504281  110500 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.mk",
	I0114 10:33:15.504286  110500 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0114 10:33:15.504290  110500 command_runner.go:130] >     "Prefix": "eth",
	I0114 10:33:15.504295  110500 command_runner.go:130] >     "Networks": [
	I0114 10:33:15.504299  110500 command_runner.go:130] >       {
	I0114 10:33:15.504306  110500 command_runner.go:130] >         "Config": {
	I0114 10:33:15.504311  110500 command_runner.go:130] >           "Name": "cni-loopback",
	I0114 10:33:15.504319  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:33:15.504325  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:33:15.504329  110500 command_runner.go:130] >             {
	I0114 10:33:15.504334  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504341  110500 command_runner.go:130] >                 "type": "loopback",
	I0114 10:33:15.504345  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:33:15.504352  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504356  110500 command_runner.go:130] >               },
	I0114 10:33:15.504361  110500 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0114 10:33:15.504367  110500 command_runner.go:130] >             }
	I0114 10:33:15.504370  110500 command_runner.go:130] >           ],
	I0114 10:33:15.504382  110500 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0114 10:33:15.504386  110500 command_runner.go:130] >         },
	I0114 10:33:15.504391  110500 command_runner.go:130] >         "IFName": "lo"
	I0114 10:33:15.504397  110500 command_runner.go:130] >       },
	I0114 10:33:15.504400  110500 command_runner.go:130] >       {
	I0114 10:33:15.504406  110500 command_runner.go:130] >         "Config": {
	I0114 10:33:15.504413  110500 command_runner.go:130] >           "Name": "kindnet",
	I0114 10:33:15.504419  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:33:15.504424  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:33:15.504431  110500 command_runner.go:130] >             {
	I0114 10:33:15.504435  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504442  110500 command_runner.go:130] >                 "type": "ptp",
	I0114 10:33:15.504446  110500 command_runner.go:130] >                 "ipam": {
	I0114 10:33:15.504452  110500 command_runner.go:130] >                   "type": "host-local"
	I0114 10:33:15.504456  110500 command_runner.go:130] >                 },
	I0114 10:33:15.504460  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504466  110500 command_runner.go:130] >               },
	I0114 10:33:15.504480  110500 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.1.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0114 10:33:15.504488  110500 command_runner.go:130] >             },
	I0114 10:33:15.504492  110500 command_runner.go:130] >             {
	I0114 10:33:15.504499  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504503  110500 command_runner.go:130] >                 "type": "portmap",
	I0114 10:33:15.504510  110500 command_runner.go:130] >                 "capabilities": {
	I0114 10:33:15.504515  110500 command_runner.go:130] >                   "portMappings": true
	I0114 10:33:15.504521  110500 command_runner.go:130] >                 },
	I0114 10:33:15.504527  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:33:15.504535  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504540  110500 command_runner.go:130] >               },
	I0114 10:33:15.504550  110500 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0114 10:33:15.504554  110500 command_runner.go:130] >             }
	I0114 10:33:15.504561  110500 command_runner.go:130] >           ],
	I0114 10:33:15.504591  110500 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.1.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0114 10:33:15.504601  110500 command_runner.go:130] >         },
	I0114 10:33:15.504605  110500 command_runner.go:130] >         "IFName": "eth0"
	I0114 10:33:15.504609  110500 command_runner.go:130] >       }
	I0114 10:33:15.504612  110500 command_runner.go:130] >     ]
	I0114 10:33:15.504618  110500 command_runner.go:130] >   },
	I0114 10:33:15.504622  110500 command_runner.go:130] >   "config": {
	I0114 10:33:15.504626  110500 command_runner.go:130] >     "containerd": {
	I0114 10:33:15.504631  110500 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0114 10:33:15.504637  110500 command_runner.go:130] >       "defaultRuntimeName": "default",
	I0114 10:33:15.504641  110500 command_runner.go:130] >       "defaultRuntime": {
	I0114 10:33:15.504649  110500 command_runner.go:130] >         "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504654  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:33:15.504658  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:33:15.504665  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:33:15.504670  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:33:15.504674  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:33:15.504681  110500 command_runner.go:130] >         "options": null,
	I0114 10:33:15.504688  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:33:15.504697  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:33:15.504704  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:33:15.504715  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:33:15.504723  110500 command_runner.go:130] >       },
	I0114 10:33:15.504730  110500 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0114 10:33:15.504740  110500 command_runner.go:130] >         "runtimeType": "",
	I0114 10:33:15.504751  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:33:15.504757  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:33:15.504766  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:33:15.504772  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:33:15.504781  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:33:15.504788  110500 command_runner.go:130] >         "options": null,
	I0114 10:33:15.504800  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:33:15.504810  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:33:15.504818  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:33:15.504823  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:33:15.504829  110500 command_runner.go:130] >       },
	I0114 10:33:15.504833  110500 command_runner.go:130] >       "runtimes": {
	I0114 10:33:15.504839  110500 command_runner.go:130] >         "default": {
	I0114 10:33:15.504844  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504850  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:33:15.504854  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:33:15.504859  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:33:15.504865  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:33:15.504869  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:33:15.504876  110500 command_runner.go:130] >           "options": null,
	I0114 10:33:15.504884  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:33:15.504891  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:33:15.504896  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:33:15.504903  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:33:15.504907  110500 command_runner.go:130] >         },
	I0114 10:33:15.504914  110500 command_runner.go:130] >         "runc": {
	I0114 10:33:15.504919  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504926  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:33:15.504930  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:33:15.504937  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:33:15.504942  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:33:15.504949  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:33:15.504953  110500 command_runner.go:130] >           "options": {
	I0114 10:33:15.504968  110500 command_runner.go:130] >             "SystemdCgroup": false
	I0114 10:33:15.504974  110500 command_runner.go:130] >           },
	I0114 10:33:15.504980  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:33:15.504985  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:33:15.504991  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:33:15.504997  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:33:15.505001  110500 command_runner.go:130] >         }
	I0114 10:33:15.505005  110500 command_runner.go:130] >       },
	I0114 10:33:15.505011  110500 command_runner.go:130] >       "noPivot": false,
	I0114 10:33:15.505016  110500 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0114 10:33:15.505024  110500 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0114 10:33:15.505029  110500 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0114 10:33:15.505035  110500 command_runner.go:130] >     },
	I0114 10:33:15.505039  110500 command_runner.go:130] >     "cni": {
	I0114 10:33:15.505047  110500 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0114 10:33:15.505053  110500 command_runner.go:130] >       "confDir": "/etc/cni/net.mk",
	I0114 10:33:15.505057  110500 command_runner.go:130] >       "maxConfNum": 1,
	I0114 10:33:15.505065  110500 command_runner.go:130] >       "confTemplate": "",
	I0114 10:33:15.505070  110500 command_runner.go:130] >       "ipPref": ""
	I0114 10:33:15.505077  110500 command_runner.go:130] >     },
	I0114 10:33:15.505081  110500 command_runner.go:130] >     "registry": {
	I0114 10:33:15.505088  110500 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0114 10:33:15.505092  110500 command_runner.go:130] >       "mirrors": null,
	I0114 10:33:15.505099  110500 command_runner.go:130] >       "configs": null,
	I0114 10:33:15.505103  110500 command_runner.go:130] >       "auths": null,
	I0114 10:33:15.505109  110500 command_runner.go:130] >       "headers": null
	I0114 10:33:15.505114  110500 command_runner.go:130] >     },
	I0114 10:33:15.505120  110500 command_runner.go:130] >     "imageDecryption": {
	I0114 10:33:15.505124  110500 command_runner.go:130] >       "keyModel": "node"
	I0114 10:33:15.505130  110500 command_runner.go:130] >     },
	I0114 10:33:15.505134  110500 command_runner.go:130] >     "disableTCPService": true,
	I0114 10:33:15.505141  110500 command_runner.go:130] >     "streamServerAddress": "",
	I0114 10:33:15.505145  110500 command_runner.go:130] >     "streamServerPort": "10010",
	I0114 10:33:15.505150  110500 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0114 10:33:15.505154  110500 command_runner.go:130] >     "enableSelinux": false,
	I0114 10:33:15.505159  110500 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0114 10:33:15.505165  110500 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.8",
	I0114 10:33:15.505169  110500 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0114 10:33:15.505176  110500 command_runner.go:130] >     "systemdCgroup": false,
	I0114 10:33:15.505180  110500 command_runner.go:130] >     "enableTLSStreaming": false,
	I0114 10:33:15.505187  110500 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0114 10:33:15.505192  110500 command_runner.go:130] >       "tlsCertFile": "",
	I0114 10:33:15.505198  110500 command_runner.go:130] >       "tlsKeyFile": ""
	I0114 10:33:15.505202  110500 command_runner.go:130] >     },
	I0114 10:33:15.505207  110500 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0114 10:33:15.505214  110500 command_runner.go:130] >     "disableCgroup": false,
	I0114 10:33:15.505218  110500 command_runner.go:130] >     "disableApparmor": false,
	I0114 10:33:15.505228  110500 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0114 10:33:15.505238  110500 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0114 10:33:15.505243  110500 command_runner.go:130] >     "disableProcMount": false,
	I0114 10:33:15.505247  110500 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0114 10:33:15.505252  110500 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0114 10:33:15.505257  110500 command_runner.go:130] >     "disableHugetlbController": true,
	I0114 10:33:15.505266  110500 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0114 10:33:15.505271  110500 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0114 10:33:15.505278  110500 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0114 10:33:15.505283  110500 command_runner.go:130] >     "enableUnprivilegedPorts": false,
	I0114 10:33:15.505292  110500 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0114 10:33:15.505298  110500 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0114 10:33:15.505306  110500 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0114 10:33:15.505312  110500 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0114 10:33:15.505320  110500 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0114 10:33:15.505324  110500 command_runner.go:130] >   },
	I0114 10:33:15.505330  110500 command_runner.go:130] >   "golang": "go1.18.8",
	I0114 10:33:15.505335  110500 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0114 10:33:15.505342  110500 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0114 10:33:15.505345  110500 command_runner.go:130] > }
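The `sudo crictl info` dump above is plain JSON, so individual fields can be pulled out directly when debugging CNI or runtime settings. An illustrative one-liner, assuming jq is installed on the node; the field names come from the output above:

	# confirm where containerd's CRI plugin looks for CNI plugins and config
	sudo crictl info | jq '.config.cni | {binDir, confDir, maxConfNum}'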
	I0114 10:33:15.505500  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:33:15.505509  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:33:15.505519  110500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:33:15.505532  110500 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102822 NodeName:multinode-102822-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:33:15.505648  110500 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-102822-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
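The generated config above stacks four kubeadm documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As the preflight output later in this log points out, the cluster-side copy can be read back for comparison; an illustrative check, assuming kubectl access with the kubeconfig shown earlier (the kubelet-config ConfigMap name is the usual one for this Kubernetes version, not taken from this log):

	# what the control plane actually stores for kubeadm and the kubelet
	kubectl -n kube-system get cm kubeadm-config -o yaml
	kubectl -n kube-system get cm kubelet-config -o yaml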
	
	I0114 10:33:15.505707  110500 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-102822-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:33:15.505750  110500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:33:15.512450  110500 command_runner.go:130] > kubeadm
	I0114 10:33:15.512472  110500 command_runner.go:130] > kubectl
	I0114 10:33:15.512480  110500 command_runner.go:130] > kubelet
	I0114 10:33:15.513054  110500 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:33:15.513107  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0114 10:33:15.519785  110500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0114 10:33:15.534253  110500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:33:15.546554  110500 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:33:15.549454  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:33:15.558528  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:33:15.558803  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:33:15.558758  110500 start.go:286] JoinCluster: &{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:33:15.558854  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0114 10:33:15.558894  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:33:15.583347  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:33:15.717142  110500 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 
	I0114 10:33:15.717199  110500 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:15.717241  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:33:15.717479  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0114 10:33:15.717511  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:33:15.742483  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:33:15.884475  110500 command_runner.go:130] > node/multinode-102822-m02 cordoned
	I0114 10:33:17.906182  110500 command_runner.go:130] > pod/busybox-65db55d5d6-jth2v deleted
	I0114 10:33:17.906203  110500 command_runner.go:130] > node/multinode-102822-m02 drained
	I0114 10:33:17.939112  110500 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0114 10:33:17.939144  110500 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bwgvn, kube-system/kube-proxy-4d5n6
	I0114 10:33:17.939174  110500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.221667006s)
	I0114 10:33:17.939190  110500 node.go:109] successfully drained node "m02"
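The drain that just completed passed both --delete-local-data and its replacement --delete-emptydir-data, which is what produced the deprecation warning above. An equivalent invocation without the deprecated flag, with every other flag copied from the command in the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 \
	  --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
	  --disable-eviction --ignore-daemonsets --delete-emptydir-data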
	I0114 10:33:17.939610  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:33:17.939958  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:33:17.940363  110500 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0114 10:33:17.940424  110500 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:33:17.940434  110500 round_trippers.go:469] Request Headers:
	I0114 10:33:17.940446  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:33:17.940458  110500 round_trippers.go:473]     Content-Type: application/json
	I0114 10:33:17.940467  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:33:17.944032  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:33:17.944056  110500 round_trippers.go:577] Response Headers:
	I0114 10:33:17.944066  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:33:17 GMT
	I0114 10:33:17.944074  110500 round_trippers.go:580]     Audit-Id: 1447952b-a9b6-4bad-af8b-4518cb0f651f
	I0114 10:33:17.944082  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:33:17.944092  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:33:17.944099  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:33:17.944106  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:33:17.944118  110500 round_trippers.go:580]     Content-Length: 171
	I0114 10:33:17.944147  110500 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-102822-m02","kind":"nodes","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4"}}
	I0114 10:33:17.944184  110500 node.go:125] successfully deleted node "m02"
	I0114 10:33:17.944197  110500 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:17.944219  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:17.944239  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:17.981302  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:18.010540  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:18.010570  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:18.010578  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:18.010587  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:18.010596  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:18.010604  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:18.010620  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:18.010632  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:18.010643  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:18.010656  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:18.010668  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:18.010680  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:18.089584  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:18.089615  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 10:33:18.108662  110500 command_runner.go:130] ! W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:18.108687  110500 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 10:33:18.108704  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:18.108710  110500 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 10:33:18.108718  110500 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 10:33:18.108729  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:18.108739  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 10:33:18.108785  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:18.108798  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:18.108809  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:18.140909  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:18.141227  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:18.141249  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:18.141257  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:18.233452  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:18.688329  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:18.688363  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:18.688374  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:18.689316  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:18.689334  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:18.689341  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:18.689347  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:18.689354  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:18.689370  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:18.689382  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:18.691216  110500 command_runner.go:130] ! W0114 10:33:18.140748     931 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:18.691240  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:18.691258  110500 retry.go:31] will retry after 11.645600532s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.339744  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:30.339825  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:30.371164  110500 command_runner.go:130] ! W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:30.391839  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:30.474705  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:30.474732  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.476832  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:30.476856  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:30.476865  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:30.476871  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:30.476879  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:30.476888  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:30.476897  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:30.476907  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:30.476923  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:30.476933  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:30.476947  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:30.476959  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:30.476971  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:30.476983  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:30.476997  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 10:33:30.477078  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.477095  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:30.477113  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:30.507252  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:30.507323  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:30.507351  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:30.507364  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:30.510729  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:30.527843  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:30.527883  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:30.527895  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:30.527906  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:30.527919  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:30.527933  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:30.527944  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:30.527952  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:30.527963  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:30.527971  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:30.529841  110500 command_runner.go:130] ! W0114 10:33:30.506899    1326 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:30.529869  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:30.529889  110500 retry.go:31] will retry after 14.065712808s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.596274  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:44.596340  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:44.627227  110500 command_runner.go:130] ! W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:44.647728  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:44.732356  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:44.732389  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.734274  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:44.734294  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:44.734301  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:44.734305  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:44.734310  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:44.734316  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:44.734323  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:44.734328  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:44.734333  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:44.734338  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:44.734347  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:44.734352  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:44.734357  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:44.734362  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:44.734372  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 10:33:44.734418  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.734433  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:44.734444  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:44.764574  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:44.764594  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:44.764606  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:44.764759  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:44.768331  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:44.785135  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:44.785167  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:44.785178  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:44.785189  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:44.785200  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:44.785211  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:44.785217  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:44.785224  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:44.785239  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:44.785247  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:44.786933  110500 command_runner.go:130] ! W0114 10:33:44.764260    1389 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:44.786961  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:44.786980  110500 retry.go:31] will retry after 20.804343684s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
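
The failure above is the stale-registration case: the API server still holds a Node object named "multinode-102822-m02" from before the restart, and kubeadm refuses to join while that node is reported "Ready". minikube recovers by running "kubeadm reset --force" on the worker and retrying; the retry below goes through once the old registration stops reporting Ready. A minimal manual equivalent would look like the following sketch (token and CA hash left as placeholders):

    # On the control plane: remove the stale Node object named in the error.
    $ kubectl delete node multinode-102822-m02
    # On the worker: wipe the previous kubelet/kubeadm state.
    $ sudo kubeadm reset --force
    # Rejoin with the same endpoint and flags the log shows minikube using.
    $ sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --ignore-preflight-errors=all
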
	I0114 10:34:05.591739  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:34:05.591806  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:34:05.623871  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:34:05.645267  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:34:05.645298  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:34:05.645308  110500 command_runner.go:130] > OS: Linux
	I0114 10:34:05.645316  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:34:05.645324  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:34:05.645332  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:34:05.645344  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:34:05.645357  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:34:05.645370  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:34:05.645390  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:34:05.645402  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:34:05.645416  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:34:05.714434  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:34:05.714463  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 10:34:05.738856  110500 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:34:05.738889  110500 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:34:05.738898  110500 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 10:34:05.816843  110500 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0114 10:34:11.333257  110500 command_runner.go:130] > This node has joined the cluster:
	I0114 10:34:11.333282  110500 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0114 10:34:11.333289  110500 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0114 10:34:11.333295  110500 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0114 10:34:11.335498  110500 command_runner.go:130] ! W0114 10:34:05.623434    1411 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:34:11.335523  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:34:11.335543  110500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": (5.743723595s)
	I0114 10:34:11.335568  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0114 10:34:11.487045  110500 start.go:288] JoinCluster complete in 55.928280874s
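
As the kubeadm output above suggests, the join can be confirmed from the control plane, where all three nodes should now be registered:

    $ kubectl get nodes -o wide
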
	I0114 10:34:11.487079  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:34:11.487087  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:34:11.487145  110500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:34:11.490436  110500 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 10:34:11.490463  110500 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 10:34:11.490478  110500 command_runner.go:130] > Device: 34h/52d	Inode: 565966      Links: 1
	I0114 10:34:11.490489  110500 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:34:11.490502  110500 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:34:11.490512  110500 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:34:11.490523  110500 command_runner.go:130] > Change: 2023-01-14 10:06:59.488187836 +0000
	I0114 10:34:11.490531  110500 command_runner.go:130] >  Birth: -
	I0114 10:34:11.490577  110500 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:34:11.490589  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:34:11.503251  110500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:34:11.654512  110500 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:34:11.656123  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:34:11.657892  110500 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 10:34:11.665488  110500 command_runner.go:130] > daemonset.apps/kindnet configured
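
With three nodes detected, minikube re-applies its kindnet CNI manifest; the "unchanged"/"configured" lines above are ordinary kubectl apply output. Rollout across the nodes can be checked with a sketch like this (the kube-system namespace and the app=kindnet label are assumptions about minikube's manifest):

    $ kubectl -n kube-system rollout status daemonset/kindnet
    $ kubectl -n kube-system get pods -l app=kindnet -o wide
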
	I0114 10:34:11.669420  110500 start.go:212] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:34:11.671634  110500 out.go:177] * Verifying Kubernetes components...
	I0114 10:34:11.673414  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:34:11.682991  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:34:11.683197  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:34:11.683407  110500 node_ready.go:35] waiting up to 6m0s for node "multinode-102822-m02" to be "Ready" ...
	I0114 10:34:11.683462  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:34:11.683469  110500 round_trippers.go:469] Request Headers:
	I0114 10:34:11.683476  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:34:11.683486  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:34:11.685497  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:34:11.685521  110500 round_trippers.go:577] Response Headers:
	I0114 10:34:11.685532  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:34:11.685540  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:34:11.685548  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:34:11.685558  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:34:11.685571  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:34:11 GMT
	I0114 10:34:11.685584  110500 round_trippers.go:580]     Audit-Id: f79a2115-26cd-46bc-8cab-079a2a0ca5bf
	I0114 10:34:11.685686  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"f1608f12-9a61-41d8-b38b-2fa2b878a3bb","resourceVersion":"915","creationTimestamp":"2023-01-14T10:33:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:33:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update"," [truncated 4761 chars]
	I0114 10:34:11.686002  110500 node_ready.go:53] node "multinode-102822-m02" has status "Ready":"Unknown"
	I0114 10:34:11.686020  110500 node_ready.go:38] duration metric: took 2.599177ms waiting for node "multinode-102822-m02" to be "Ready" ...
	I0114 10:34:11.687814  110500 out.go:177] 
	W0114 10:34:11.689189  110500 out.go:239] X Exiting due to GUEST_START: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: node "multinode-102822-m02" has status "Ready":"Unknown"
	W0114 10:34:11.689225  110500 out.go:239] * 
	W0114 10:34:11.690030  110500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:34:11.691589  110500 out.go:177] 
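
The exit is driven by the worker's Ready condition reporting "Unknown" rather than "True". The check the wait loop performs with a GET on /api/v1/nodes/multinode-102822-m02 can be reproduced with kubectl's JSONPath output:

    # Prints True, False, or Unknown for the node's Ready condition.
    $ kubectl get node multinode-102822-m02 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
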
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	ff74c8ad1795d       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   2d7e3cdf99356
	5dd8ac1a35968       beaaf00edd38a       About a minute ago   Running             kube-proxy                1                   872946e88e635
	778e081cfbf69       8c811b4aec35f       About a minute ago   Running             busybox                   1                   99cb0cb52bd50
	6e3feef784cf5       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   d78446cc3352d
	15ee53e6aba5d       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   2d7e3cdf99356
	a2bee209de6eb       5185b96f0becf       About a minute ago   Running             coredns                   1                   21b5c4e68a1af
	b38b962e79724       a8a176a5d5d69       About a minute ago   Running             etcd                      1                   e3e44fec0d38d
	9f38caa8f201e       0346dbd74bcb9       About a minute ago   Running             kube-apiserver            1                   a005864bc203c
	2d6d8d7cecf8f       6039992312758       About a minute ago   Running             kube-controller-manager   1                   370d556f0b87e
	fcb2330166522       6d23ec0e8b87e       About a minute ago   Running             kube-scheduler            1                   055c0d220f8c0
	76169c252da32       8c811b4aec35f       4 minutes ago        Exited              busybox                   0                   e7e7b3fb5d878
	dd932b8cd4a02       5185b96f0becf       5 minutes ago        Exited              coredns                   0                   a5e06867ee15a
	fa1fbfcc6ff2a       d6e3e26021b60       5 minutes ago        Exited              kindnet-cni               0                   92b2f3fda0a5f
	6391dbda6c818       beaaf00edd38a       5 minutes ago        Exited              kube-proxy                0                   aec55d29d9fbb
	1cb572f06fea4       6039992312758       5 minutes ago        Exited              kube-controller-manager   0                   1543db59f0d6a
	9a1ebe17670ca       0346dbd74bcb9       5 minutes ago        Exited              kube-apiserver            0                   f17729d9ba6b0
	7263297512701       6d23ec0e8b87e       5 minutes ago        Exited              kube-scheduler            0                   6c251bcc8b6a8
	1485b440fe92c       a8a176a5d5d69       5 minutes ago        Exited              etcd                      0                   044bafd02d44a
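
The table above is CRI container state collected from the control-plane node; with the containerd runtime it can be regenerated by hand, e.g. (assuming the docker driver and this profile name):

    $ minikube ssh -p multinode-102822 -- sudo crictl ps -a
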
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2023-01-14 10:31:57 UTC, end at Sat 2023-01-14 10:34:13 UTC. --
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.041737219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.041752786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.042146572Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0 pid=1649 runtime=io.containerd.runc.v2
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.151127866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlcll,Uid:91e05737-5cbf-404c-8b7c-75045f584885,Namespace:kube-system,Attempt:1,} returns sandbox id \"872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0\""
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.155190898Z" level=info msg="CreateContainer within sandbox \"872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.241053055Z" level=info msg="CreateContainer within sandbox \"872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4\""
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.242016429Z" level=info msg="StartContainer for \"5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4\""
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.384609845Z" level=info msg="StartContainer for \"5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4\" returns successfully"
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.630496668Z" level=info msg="shim disconnected" id=15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.630560718Z" level=warning msg="cleaning up after shim disconnected" id=15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2 namespace=k8s.io
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.630572509Z" level=info msg="cleaning up dead shim"
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.638898216Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:32:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1966 runtime=io.containerd.runc.v2\n"
	Jan 14 10:32:53 multinode-102822 containerd[386]: time="2023-01-14T10:32:53.459305637Z" level=info msg="RemoveContainer for \"8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f\""
	Jan 14 10:32:53 multinode-102822 containerd[386]: time="2023-01-14T10:32:53.464363070Z" level=info msg="RemoveContainer for \"8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f\" returns successfully"
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.278195346Z" level=info msg="CreateContainer within sandbox \"2d7e3cdf993560349ea2320d065ed8ccc5feaa8a7e03948691e6a54662fc2a78\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.304562736Z" level=info msg="CreateContainer within sandbox \"2d7e3cdf993560349ea2320d065ed8ccc5feaa8a7e03948691e6a54662fc2a78\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406\""
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.305168429Z" level=info msg="StartContainer for \"ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406\""
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.377037261Z" level=info msg="StartContainer for \"ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406\" returns successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172204300Z" level=info msg="StopPodSandbox for \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\""
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172310091Z" level=info msg="TearDown network for sandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172342365Z" level=info msg="StopPodSandbox for \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" returns successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172766315Z" level=info msg="RemovePodSandbox for \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\""
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172806908Z" level=info msg="Forcibly stopping sandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\""
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172883998Z" level=info msg="TearDown network for sandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.177614484Z" level=info msg="RemovePodSandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" returns successfully"
	
	* 
	* ==> coredns [a2bee209de6eb085860890c15e08d7f808c0f0609cb757aa0869e3c82e6984f4] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 74073c0c68a507b50ca81d319bd4852e1242323807443dc549ab9f2fb21c8587977d5d9a7ecbfada54b5ff45c9b40d98fc730bfb6641b1b669d8fa8e6e9cea7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> coredns [dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 74073c0c68a507b50ca81d319bd4852e1242323807443dc549ab9f2fb21c8587977d5d9a7ecbfada54b5ff45c9b40d98fc730bfb6641b1b669d8fa8e6e9cea7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-102822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=multinode-102822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_28_43_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-102822
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:34:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-102822
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                d8105840-a25f-4140-bd3c-c1b0fe6228a7
	  Boot ID:                    3b63a5ae-0a73-415b-af74-fb930cc7c08b
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-2hdwz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 coredns-565d847f94-f5dzh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m18s
	  kube-system                 etcd-multinode-102822                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m31s
	  kube-system                 kindnet-zm4vf                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m19s
	  kube-system                 kube-apiserver-multinode-102822             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-multinode-102822    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-qlcll                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-multinode-102822             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m38s (x4 over 5m38s)  kubelet          Node multinode-102822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x4 over 5m38s)  kubelet          Node multinode-102822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x4 over 5m38s)  kubelet          Node multinode-102822 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     5m31s                  kubelet          Node multinode-102822 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m31s                  kubelet          Node multinode-102822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s                  kubelet          Node multinode-102822 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m31s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m19s                  node-controller  Node multinode-102822 event: Registered Node multinode-102822 in Controller
	  Normal  NodeReady                5m11s                  kubelet          Node multinode-102822 status is now: NodeReady
	  Normal  Starting                 116s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node multinode-102822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node multinode-102822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node multinode-102822 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                   node-controller  Node multinode-102822 event: Registered Node multinode-102822 in Controller
	
	
	Name:               multinode-102822-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102822-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:33:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-102822-m02" not found
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Jan 2023 10:33:17 +0000   Sat, 14 Jan 2023 10:33:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Jan 2023 10:33:17 +0000   Sat, 14 Jan 2023 10:33:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Jan 2023 10:33:17 +0000   Sat, 14 Jan 2023 10:33:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Jan 2023 10:33:17 +0000   Sat, 14 Jan 2023 10:33:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-102822-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                82c44291-a31f-43c5-972f-5aaaac290f21
	  Boot ID:                    3b63a5ae-0a73-415b-af74-fb930cc7c08b
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-tch5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kindnet-bwgvn               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m46s
	  kube-system                 kube-proxy-4d5n6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m58s)  kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m58s)  kubelet          Node multinode-102822-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 76s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  76s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    75s (x4 over 76s)      kubelet          Node multinode-102822-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x4 over 76s)      kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  74s (x5 over 76s)      kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             15s                    node-controller  Node multinode-102822-m02 status is now: NodeNotReady
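
m02's conditions flipped to Unknown at 10:33:58 because its kubelet stopped posting status, after which the node-controller applied the unreachable taints and the NodeNotReady event above. A first diagnostic step is to inspect the kubelet on the worker itself; this sketch assumes minikube ssh's -n node selector:

    $ minikube ssh -p multinode-102822 -n m02 -- sudo systemctl status kubelet --no-pager
    $ minikube ssh -p multinode-102822 -n m02 -- sudo journalctl -u kubelet -n 50 --no-pager
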
	
	
	Name:               multinode-102822-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102822-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:31:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-102822-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:31:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Jan 2023 10:31:12 +0000   Sat, 14 Jan 2023 10:33:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Jan 2023 10:31:12 +0000   Sat, 14 Jan 2023 10:33:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Jan 2023 10:31:12 +0000   Sat, 14 Jan 2023 10:33:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Jan 2023 10:31:12 +0000   Sat, 14 Jan 2023 10:33:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-102822-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                323f0c8b-8167-4a84-a6eb-3b266f406879
	  Boot ID:                    3b63a5ae-0a73-415b-af74-fb930cc7c08b
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fb2ng       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m53s
	  kube-system                 kube-proxy-bzd24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 4m6s)   kubelet          Node multinode-102822-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 4m6s)   kubelet          Node multinode-102822-m03 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m12s)  kubelet          Node multinode-102822-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m12s)  kubelet          Node multinode-102822-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m12s)  kubelet          Node multinode-102822-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m1s                   kubelet          Node multinode-102822-m03 status is now: NodeReady
	  Normal  RegisteredNode           100s                   node-controller  Node multinode-102822-m03 event: Registered Node multinode-102822-m03 in Controller
	  Normal  NodeNotReady             60s                    node-controller  Node multinode-102822-m03 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* [  +4.654706] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=0000000092f5ea2a{9p.inode} n=00000000eb6e0172
	[  +0.007355] FS-Cache: O-key=[8] '87a00f0200000000'
	[  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006684] FS-Cache: N-cookie d=0000000092f5ea2a{9p.inode} n=00000000e1d5d334
	[  +0.008751] FS-Cache: N-key=[8] '87a00f0200000000'
	[  +0.343217] FS-Cache: Duplicate cookie detected
	[  +0.004671] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=0000000092f5ea2a{9p.inode} n=00000000d68b2e5d
	[  +0.007346] FS-Cache: O-key=[8] '91a00f0200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008041] FS-Cache: N-cookie d=0000000092f5ea2a{9p.inode} n=00000000e2277570
	[  +0.008761] FS-Cache: N-key=[8] '91a00f0200000000'
	[Jan14 10:21] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan14 10:32] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000006] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +1.007675] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000005] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +2.011857] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000006] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +4.031727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000030] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +8.195348] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000007] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	
	* 
	* ==> etcd [1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2] <==
	* {"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-102822 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:28:37.248Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:28:37.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"warn","ts":"2023-01-14T10:29:11.089Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.668881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2023-01-14T10:29:11.089Z","caller":"traceutil/trace.go:171","msg":"trace[1315289163] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:416; }","duration":"183.911052ms","start":"2023-01-14T10:29:10.905Z","end":"2023-01-14T10:29:11.089Z","steps":["trace[1315289163] 'agreement among raft nodes before linearized reading'  (duration: 58.269131ms)","trace[1315289163] 'range keys from in-memory index tree'  (duration: 125.354624ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-14T10:30:00.966Z","caller":"traceutil/trace.go:171","msg":"trace[654694052] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"115.519919ms","start":"2023-01-14T10:30:00.851Z","end":"2023-01-14T10:30:00.966Z","steps":["trace[654694052] 'process raft request'  (duration: 54.574474ms)","trace[654694052] 'compare'  (duration: 60.838552ms)"],"step_count":2}
	
	* 
	* ==> etcd [b38b962e79724d6f1c63d4dc3d78a13f8cb401220c7a537a8db80bdf0b792460] <==
	* {"level":"info","ts":"2023-01-14T10:32:18.224Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T10:32:18.224Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:32:18.227Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-102822 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:32:19.155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:32:19.155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:32:19.156Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:32:19.156Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  10:34:13 up  1:16,  0 users,  load average: 0.73, 0.81, 0.76
	Linux multinode-102822 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22] <==
	* I0114 10:28:39.505632       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:28:39.520109       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:28:39.520322       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:28:39.520469       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:28:39.520616       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:28:39.520692       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:28:39.531912       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:28:40.207596       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:28:40.409513       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0114 10:28:40.412238       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0114 10:28:40.412261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:28:40.717534       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:28:40.744085       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:28:40.850271       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0114 10:28:40.857124       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0114 10:28:40.857936       1 controller.go:616] quota admission added evaluator for: endpoints
	I0114 10:28:40.861414       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 10:28:41.450119       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:28:42.049560       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:28:42.055584       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0114 10:28:42.062475       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:28:42.134960       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:28:54.905735       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0114 10:28:55.106481       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [9f38caa8f201e108a7701a4a1d039d002613a957829d78fb98ff79df81ed0c18] <==
	* I0114 10:32:21.015075       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:32:21.021493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:32:21.030461       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0114 10:32:21.030484       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0114 10:32:21.030513       1 apf_controller.go:300] Starting API Priority and Fairness config controller
	I0114 10:32:21.030738       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:32:21.031056       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0114 10:32:21.042543       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0114 10:32:21.044284       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:32:21.120718       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:32:21.121958       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:32:21.122448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:32:21.122475       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:32:21.122486       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:32:21.130545       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:32:21.130620       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:32:21.805971       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:32:22.017251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:32:23.570545       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:32:23.668198       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:32:23.677305       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:32:23.726354       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:32:23.730823       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:32:33.329600       1 controller.go:616] quota admission added evaluator for: endpoints
	I0114 10:32:33.416103       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028] <==
	* I0114 10:28:55.350784       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-k8hm7"
	I0114 10:29:04.205370       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0114 10:29:27.954465       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m02" does not exist
	I0114 10:29:27.958710       1 range_allocator.go:367] Set node multinode-102822-m02 PodCIDR to [10.244.1.0/24]
	I0114 10:29:27.962127       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4d5n6"
	I0114 10:29:27.968576       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bwgvn"
	W0114 10:29:29.208844       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m02. Assuming now as a timestamp.
	I0114 10:29:29.208925       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m02 event: Registered Node multinode-102822-m02 in Controller"
	W0114 10:29:48.543111       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	I0114 10:29:50.951973       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-65db55d5d6 to 2"
	I0114 10:29:50.957409       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-jth2v"
	I0114 10:29:50.961037       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-2hdwz"
	W0114 10:30:20.094854       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:30:20.094902       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m03" does not exist
	I0114 10:30:20.106934       1 range_allocator.go:367] Set node multinode-102822-m03 PodCIDR to [10.244.2.0/24]
	I0114 10:30:20.107450       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bzd24"
	I0114 10:30:20.107474       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fb2ng"
	W0114 10:30:24.222079       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m03. Assuming now as a timestamp.
	I0114 10:30:24.222142       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m03 event: Registered Node multinode-102822-m03 in Controller"
	W0114 10:30:26.502257       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:31:01.364313       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:31:02.150607       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:31:02.150838       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m03" does not exist
	I0114 10:31:02.154611       1 range_allocator.go:367] Set node multinode-102822-m03 PodCIDR to [10.244.3.0/24]
	W0114 10:31:12.292377       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	
	* 
	* ==> kube-controller-manager [2d6d8d7cecf8f4cf0dc5ea9dc307b93e538956656bc46cdb02f8f51ec9f02109] <==
	* W0114 10:32:33.502829       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822. Assuming now as a timestamp.
	W0114 10:32:33.502877       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m02. Assuming now as a timestamp.
	W0114 10:32:33.502910       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m03. Assuming now as a timestamp.
	I0114 10:32:33.502934       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0114 10:32:33.502976       1 event.go:294] "Event occurred" object="multinode-102822" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822 event: Registered Node multinode-102822 in Controller"
	I0114 10:32:33.502997       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m02 event: Registered Node multinode-102822-m02 in Controller"
	I0114 10:32:33.503009       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m03 event: Registered Node multinode-102822-m03 in Controller"
	I0114 10:32:33.510031       1 shared_informer.go:262] Caches are synced for daemon sets
	I0114 10:32:33.525304       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:32:33.843127       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:32:33.873339       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:32:33.873365       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:33:13.514508       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102822-m03 status is now: NodeNotReady"
	W0114 10:33:13.514511       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	I0114 10:33:13.520444       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-bzd24" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.526157       1 event.go:294] "Event occurred" object="kube-system/kindnet-fb2ng" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.530639       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102822-m02 status is now: NodeNotReady"
	I0114 10:33:13.535978       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-4d5n6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.540620       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-jth2v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.546008       1 event.go:294] "Event occurred" object="kube-system/kindnet-bwgvn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:15.906483       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-tch5p"
	W0114 10:33:17.990591       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m02" does not exist
	W0114 10:33:17.990638       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	I0114 10:33:17.996925       1 range_allocator.go:367] Set node multinode-102822-m02 PodCIDR to [10.244.1.0/24]
	I0114 10:33:58.556555       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102822-m02 status is now: NodeNotReady"
	
	* 
	* ==> kube-proxy [5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4] <==
	* I0114 10:32:23.450353       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0114 10:32:23.450451       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0114 10:32:23.450519       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:32:23.470509       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:32:23.470546       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:32:23.470555       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:32:23.470567       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:32:23.470588       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:32:23.470746       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:32:23.470991       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:32:23.471008       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:32:23.471559       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:32:23.471589       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:32:23.471598       1 config.go:444] "Starting node config controller"
	I0114 10:32:23.471608       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:32:23.471637       1 config.go:317] "Starting service config controller"
	I0114 10:32:23.471648       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:32:23.571802       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:32:23.571834       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:32:23.571834       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2] <==
	* I0114 10:28:55.524702       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0114 10:28:55.524799       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0114 10:28:55.524828       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:28:55.548489       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:28:55.548526       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:28:55.548536       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:28:55.548548       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:28:55.548568       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:28:55.548740       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:28:55.548989       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:28:55.549002       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:28:55.549468       1 config.go:317] "Starting service config controller"
	I0114 10:28:55.549493       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:28:55.549528       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:28:55.549539       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:28:55.549530       1 config.go:444] "Starting node config controller"
	I0114 10:28:55.549556       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:28:55.649631       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:28:55.649672       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:28:55.649701       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf] <==
	* W0114 10:28:39.527575       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0114 10:28:39.527621       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0114 10:28:39.527575       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:39.527663       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:39.527666       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 10:28:39.527716       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 10:28:39.527773       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:39.527803       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:39.527825       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:39.527808       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:39.527939       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:39.527980       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:40.392746       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:40.392785       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 10:28:40.488188       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0114 10:28:40.488231       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0114 10:28:40.492144       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0114 10:28:40.492170       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:40.578843       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:40.578871       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:40.585826       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0114 10:28:40.585854       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:28:40.609956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:40.609985       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0114 10:28:43.525198       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fcb2330166522f9111ac906a8126009b1eb9533cf1305abd799e3399b7e38d65] <==
	* I0114 10:32:18.828989       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:32:21.039797       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:32:21.039833       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0114 10:32:21.039855       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:32:21.039865       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:32:21.127447       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:32:21.127476       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:32:21.128802       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:32:21.128832       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:32:21.128904       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:32:21.128937       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:32:21.229818       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:31:57 UTC, end at Sat 2023-01-14 10:34:14 UTC. --
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242328     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e664ab33-93db-45e5-a147-81c14dd05837-cni-cfg\") pod \"kindnet-zm4vf\" (UID: \"e664ab33-93db-45e5-a147-81c14dd05837\") " pod="kube-system/kindnet-zm4vf"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242347     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e664ab33-93db-45e5-a147-81c14dd05837-lib-modules\") pod \"kindnet-zm4vf\" (UID: \"e664ab33-93db-45e5-a147-81c14dd05837\") " pod="kube-system/kindnet-zm4vf"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242430     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91e05737-5cbf-404c-8b7c-75045f584885-kube-proxy\") pod \"kube-proxy-qlcll\" (UID: \"91e05737-5cbf-404c-8b7c-75045f584885\") " pod="kube-system/kube-proxy-qlcll"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242475     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb-config-volume\") pod \"coredns-565d847f94-f5dzh\" (UID: \"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb\") " pod="kube-system/coredns-565d847f94-f5dzh"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242529     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ae50847f-5144-4e4b-a340-5cbd0bbb55a2-tmp\") pod \"storage-provisioner\" (UID: \"ae50847f-5144-4e4b-a340-5cbd0bbb55a2\") " pod="kube-system/storage-provisioner"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242561     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e05737-5cbf-404c-8b7c-75045f584885-lib-modules\") pod \"kube-proxy-qlcll\" (UID: \"91e05737-5cbf-404c-8b7c-75045f584885\") " pod="kube-system/kube-proxy-qlcll"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242601     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6v9k\" (UniqueName: \"kubernetes.io/projected/7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb-kube-api-access-v6v9k\") pod \"coredns-565d847f94-f5dzh\" (UID: \"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb\") " pod="kube-system/coredns-565d847f94-f5dzh"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242625     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e664ab33-93db-45e5-a147-81c14dd05837-xtables-lock\") pod \"kindnet-zm4vf\" (UID: \"e664ab33-93db-45e5-a147-81c14dd05837\") " pod="kube-system/kindnet-zm4vf"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242671     767 reconciler.go:169] "Reconciler: start to sync state"
	Jan 14 10:32:22 multinode-102822 kubelet[767]: I0114 10:32:22.364716     767 request.go:682] Waited for 1.020652354s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Jan 14 10:32:23 multinode-102822 kubelet[767]: I0114 10:32:23.360444     767 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 14 10:32:27 multinode-102822 kubelet[767]: E0114 10:32:27.342361     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:27 multinode-102822 kubelet[767]: E0114 10:32:27.342409     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:32:37 multinode-102822 kubelet[767]: E0114 10:32:37.360656     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:37 multinode-102822 kubelet[767]: E0114 10:32:37.360716     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:32:47 multinode-102822 kubelet[767]: E0114 10:32:47.379514     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:47 multinode-102822 kubelet[767]: E0114 10:32:47.379579     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:32:53 multinode-102822 kubelet[767]: I0114 10:32:53.457979     767 scope.go:115] "RemoveContainer" containerID="8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	Jan 14 10:32:53 multinode-102822 kubelet[767]: I0114 10:32:53.458307     767 scope.go:115] "RemoveContainer" containerID="15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2"
	Jan 14 10:32:53 multinode-102822 kubelet[767]: E0114 10:32:53.458566     767 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae50847f-5144-4e4b-a340-5cbd0bbb55a2)\"" pod="kube-system/storage-provisioner" podUID=ae50847f-5144-4e4b-a340-5cbd0bbb55a2
	Jan 14 10:32:57 multinode-102822 kubelet[767]: E0114 10:32:57.394440     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:57 multinode-102822 kubelet[767]: E0114 10:32:57.394487     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:33:04 multinode-102822 kubelet[767]: I0114 10:33:04.275899     767 scope.go:115] "RemoveContainer" containerID="15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2"
	Jan 14 10:33:07 multinode-102822 kubelet[767]: E0114 10:33:07.412188     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:33:07 multinode-102822 kubelet[767]: E0114 10:33:07.412232     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	
	* 
	* ==> storage-provisioner [15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2] <==
	* I0114 10:32:22.605792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0114 10:32:52.608038       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406] <==
	* I0114 10:33:04.384355       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:33:04.390694       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:33:04.390738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:33:21.786224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:33:21.786294       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00795c43-9e4e-4929-bf49-b21bb407b065", APIVersion:"v1", ResourceVersion:"872", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-102822_20747e10-56a4-4e23-b77f-fb34ebcbf6d9 became leader
	I0114 10:33:21.786344       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-102822_20747e10-56a4-4e23-b77f-fb34ebcbf6d9!
	I0114 10:33:21.886858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-102822_20747e10-56a4-4e23-b77f-fb34ebcbf6d9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-102822 -n multinode-102822
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-102822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-tch5p
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-102822 describe pod busybox-65db55d5d6-tch5p
helpers_test.go:280: (dbg) kubectl --context multinode-102822 describe pod busybox-65db55d5d6-tch5p:

                                                
                                                
-- stdout --
	Name:             busybox-65db55d5d6-tch5p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-102822-m02/
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jkptf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-jkptf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  59s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  57s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Normal   Scheduled         55s   default-scheduler  Successfully assigned default/busybox-65db55d5d6-tch5p to multinode-102822-m02

                                                
                                                
-- /stdout --
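The two FailedScheduling events above show the pod blocked by untolerated node.kubernetes.io/unreachable taints (plus pod anti-affinity on the remaining nodes) before it was finally bound to multinode-102822-m02. One way to confirm which taints a node carried at the time, sketched against the same kubectl context used by the test:

	kubectl --context multinode-102822 get node multinode-102822-m02 -o jsonpath='{.spec.taints}'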
helpers_test.go:283: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (180.03s)
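For local triage, a minimal reproduction sketch (assumes a minikube source checkout at the commit in this report with the docker driver and containerd runtime available; the CI job's exact flags are not shown here, so the timeout is illustrative and any extra arguments the harness needs, such as the path to the built minikube binary, are omitted):

	go test ./test/integration -run "TestMultiNode/serial/RestartKeepsNodes" -timeout 30m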

                                                
                                    
TestMultiNode/serial/DeleteNode (6.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-102822 node delete m03: (3.106035508s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:427: expected 2 nodes to be Ready, got 
-- stdout --
	NAME                   STATUS     ROLES           AGE     VERSION
	multinode-102822       Ready      control-plane   5m39s   v1.25.3
	multinode-102822-m02   NotReady   <none>          61s     v1.25.3

                                                
                                                
-- /stdout --
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:435: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	 Unknown
	'

                                                
                                                
-- /stdout --
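The go-template above prints one Ready status per node; the Unknown entry corresponds to multinode-102822-m02, shown NotReady in the table above after its kubelet stopped heartbeating. An equivalent check that pairs each node name with its Ready condition, sketched with kubectl's JSONPath support instead of the test's go-template:

	kubectl --context multinode-102822 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'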
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-102822
helpers_test.go:235: (dbg) docker inspect multinode-102822:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd",
	        "Created": "2023-01-14T10:28:29.623082209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 110804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:31:56.902992649Z",
	            "FinishedAt": "2023-01-14T10:31:34.915062825Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/hosts",
	        "LogPath": "/var/lib/docker/containers/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd-json.log",
	        "Name": "/multinode-102822",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-102822:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-102822",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3-init/diff:/var/lib/docker/overlay2/cfa67474dfffbd23c875ed1363951467d9d88e2b76451e5643f2505208741f3b/diff:/var/lib/docker/overlay2/073ec06077c9f139927a68d24e4f683141baf9acf954f7927a62d439b8e24069/diff:/var/lib/docker/overlay2/100e369464b40a65b67d4855b5a41f41832f93605f574ff35657d9b2d0ee5b4f/diff:/var/lib/docker/overlay2/e2f9a50fd4c46aeeaf52dd5d2c45c5548e516eaa4949cae4e8f8be3dda02e560/diff:/var/lib/docker/overlay2/6d3b34d6067ad9d3ff171a32fea0902c6748df9aeb5a46e12971cdc70934e200/diff:/var/lib/docker/overlay2/44f244a49f3260ebade676a0e6177935228bcd4504617609ee4343aa284e724c/diff:/var/lib/docker/overlay2/1cba83561484d9f781c67421553c95b75266d2217256379d5787e510ac28483f/diff:/var/lib/docker/overlay2/9ec5ab0f595877fa3d60d26e7aa243026d8b45fea861a3e12c469d81ab1ffe6c/diff:/var/lib/docker/overlay2/30d22319caaa0760daf22d54c95076cad3b970afb61aa7c018ac37b623117613/diff:/var/lib/docker/overlay2/1f5756
3ce3807317a405416fbe25b96e16e33708f4f97020c4f82e1e2b4da5ed/diff:/var/lib/docker/overlay2/604bdff9bf4c8bdcc970ae4f7e8734a5aa27c04fb328f61dea00c3740f12daba/diff:/var/lib/docker/overlay2/03f7c27604538c82d3d43dfde85aa33dc8f2658b93b51f500b27edd3b1aaed98/diff:/var/lib/docker/overlay2/f9ceccc940eb08b69d102c744810d1aff5795c7e9a58c20d43ca6857fa21b8ea/diff:/var/lib/docker/overlay2/576f7412e6f61feeea74cdfbae850007513e8aa407ce5e45f903c70ce2f89fe5/diff:/var/lib/docker/overlay2/958517a359371ca3276a50323466f96ec3d5d7687cb2f26c287a9a343fcbcd20/diff:/var/lib/docker/overlay2/c09247966342dd284c940bcd881b6187476a63e53055e9f378aaa25ceaa86263/diff:/var/lib/docker/overlay2/85bda0ea7bf5a8c05a6eb175b445c71a710e3e392fc1b70957e3902cec94586f/diff:/var/lib/docker/overlay2/7cde8ffb6999e9d99ff44b83daaf1a781dd6546a7a96eda5b901e88658c78f74/diff:/var/lib/docker/overlay2/92d42128dacdf015e3ce466b8e365093147199e2fffcda0192857efed322565f/diff:/var/lib/docker/overlay2/0f2dff826ddc5a3be056ecb8791656438fd8d9122e0bfa4bf808ff640ddd0366/diff:/var/lib/d
ocker/overlay2/44a9089aeee67c883a076dc1940e80698f487176c3d197f321518402ce7a4467/diff:/var/lib/docker/overlay2/6068fe71ba149c31fa6947b978b0755f11f334f9d40e14b5c9946cf9a103ca68/diff:/var/lib/docker/overlay2/adb5ed5619948c4b7e4d83048cd96cc3d6ded2ae453b67da2e120f4ada989e97/diff:/var/lib/docker/overlay2/d633ebbd9eed2900d2e31406be983b7d21e70ac3c07593de38c5cfb0628275ae/diff:/var/lib/docker/overlay2/87f4a27d0733b1bdf23169c5079f854d115bfd926c76a346d28259b8f2abc0f9/diff:/var/lib/docker/overlay2/4b514ac9d0ce1d6bff4ec77673304888b5a45fca7d9a52d872475d70a4bad242/diff:/var/lib/docker/overlay2/76f964a17c8531bd97500c5bf3aa0b003b317ad1c055c0d1c475d41666734b75/diff:/var/lib/docker/overlay2/0a0f3b972da362a17d673ffdcd0d42b3663faeed5e799b2b38868036d5cd1533/diff:/var/lib/docker/overlay2/a07c41d799979e1f64f7bf3d0bcd9a98b724ebea06eafa1a01b83c71c76f9d3c/diff:/var/lib/docker/overlay2/0be1fd774bf851dd17c525a17f8a015aa3c0f1f71b29033666a62cd2be3a495f/diff:/var/lib/docker/overlay2/62db7acc5b1cb93b6e26eb5c826b67cebb252c079fd5a060ba843227c91
c864f/diff:/var/lib/docker/overlay2/076dea682ce5421a9c145f8038044bf438f06c3635406efdf60ef350f109389f/diff:/var/lib/docker/overlay2/143de4d69dc548610d4e281cfb14bf70d7ed81172bee212fc15755591dea37b4/diff:/var/lib/docker/overlay2/89ecf87d7b563ffa220047c3bb13c7ea55ebb215cbd3d2731d795ce559d5b9b4/diff:/var/lib/docker/overlay2/e9f8c0a087f0832425535d00100392d8b267181825a52ae7291fb7fe7ab62614/diff:/var/lib/docker/overlay2/66fb715c26be36afdfe15f9e2562f7320c04421f7bff30da6424afc0395d1f19/diff:/var/lib/docker/overlay2/24d5a6709af6741b4216757263798c2fd2ffbe83a81f68619cd00e2107b4ff3d/diff:/var/lib/docker/overlay2/865a5915817b4d31f71061a418fcc1c284ee124c9b3a275c3676cb2b3fba32dd/diff:/var/lib/docker/overlay2/b33545ce05c040395c79c17ae2fc9b23755b589f9f6e2f94121abe1cc5c2869c/diff:/var/lib/docker/overlay2/22f66646b2dde6f03ac24f5affc8a43db7aaae6b2e9677ae4cf9e607238761e4/diff:/var/lib/docker/overlay2/789c281f8e044ab343c9800dc7431b8fbaf616ecd3419979e8a3dfbb605f8efe/diff:/var/lib/docker/overlay2/6dd50d303cdaa1e2fa047ed92b16580d8b0c2c
77552b9a13e0c356884add5310/diff:/var/lib/docker/overlay2/b1d8d5816bce1b48db468539e1bc343a7c87dee89fb1783174081611a7e0b2ee/diff:/var/lib/docker/overlay2/529b543dd76f6ad1b33944f7c0767adca9befb5d162c4c1bf13756f3c0048fb4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3720b1650fe627a6a97acd67447597e8b3f1fcba272561475553b7ed43faae3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-102822",
	                "Source": "/var/lib/docker/volumes/multinode-102822/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-102822",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-102822",
	                "name.minikube.sigs.k8s.io": "multinode-102822",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c365116ec3d8c3847cc5a6224f1bf95c642d8cee266ac42fa0fe488c76ef78f7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32867"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32866"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32863"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32864"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c365116ec3d8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-102822": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f4311dd36b9",
	                        "multinode-102822"
	                    ],
	                    "NetworkID": "8d666bf786b0ec1697724ea7b42f362db065de718263087da543d41833d5baef",
	                    "EndpointID": "2cde720be508be7509e0a998f8cb46b885515ba56d7d9484e72335be284c5878",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
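The inspect dump above is what the harness parses for connectivity details. A minimal sketch (assuming a local Docker daemon and the profile name shown) that pulls the same fields directly, using the very format strings that appear in the Last Start log further down:

	# Host port published for the node's SSH endpoint (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-102822
	# Container IP on the cluster network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' multinode-102822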
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-102822 -n multinode-102822
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-102822 logs -n 25: (1.510187907s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822:/home/docker/cp-test_multinode-102822-m02_multinode-102822.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822 sudo cat                                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m02_multinode-102822.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03:/home/docker/cp-test_multinode-102822-m02_multinode-102822-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822-m03 sudo cat                                   | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m02_multinode-102822-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp testdata/cp-test.txt                                                | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822:/home/docker/cp-test_multinode-102822-m03_multinode-102822.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822 sudo cat                                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m03_multinode-102822.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02:/home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822-m02 sudo cat                                   | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-102822 node stop m03                                                          | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	| node    | multinode-102822 node start                                                             | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:31 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC |                     |
	| stop    | -p multinode-102822                                                                     | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC | 14 Jan 23 10:31 UTC |
	| start   | -p multinode-102822                                                                     | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC |                     |
	| node    | multinode-102822 node delete                                                            | multinode-102822 | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
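	# The audit trail above ends at the failing stop/start/delete sequence; a
	# rough manual replay (sketch; flag placement assumed, binary path as used
	# throughout this report):
	out/minikube-linux-amd64 stop -p multinode-102822
	out/minikube-linux-amd64 start -p multinode-102822 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node delete m03 -p multinode-102822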
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:31:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:31:56.244080  110500 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:31:56.244253  110500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:31:56.244261  110500 out.go:309] Setting ErrFile to fd 2...
	I0114 10:31:56.244266  110500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:31:56.244366  110500 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:31:56.244882  110500 out.go:303] Setting JSON to false
	I0114 10:31:56.246188  110500 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4464,"bootTime":1673687853,"procs":640,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:31:56.246254  110500 start.go:135] virtualization: kvm guest
	I0114 10:31:56.248786  110500 out.go:177] * [multinode-102822] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:31:56.250375  110500 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:31:56.250301  110500 notify.go:220] Checking for updates...
	I0114 10:31:56.253580  110500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:31:56.255205  110500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:31:56.256807  110500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:31:56.258293  110500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:31:56.260196  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:31:56.260244  110500 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:31:56.288513  110500 docker.go:138] docker version: linux-20.10.22
	I0114 10:31:56.288613  110500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:31:56.380775  110500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:31:56.306666417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:31:56.380877  110500 docker.go:255] overlay module found
	I0114 10:31:56.383058  110500 out.go:177] * Using the docker driver based on existing profile
	I0114 10:31:56.384332  110500 start.go:294] selected driver: docker
	I0114 10:31:56.384350  110500 start.go:838] validating driver "docker" against &{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:31:56.384462  110500 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:31:56.384525  110500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:31:56.478549  110500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:31:56.403841818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:31:56.479153  110500 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:31:56.479180  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:31:56.479187  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:31:56.479205  110500 start_flags.go:319] config:
	{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:31:56.482752  110500 out.go:177] * Starting control plane node multinode-102822 in cluster multinode-102822
	I0114 10:31:56.484264  110500 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:31:56.485726  110500 out.go:177] * Pulling base image ...
	I0114 10:31:56.487160  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:31:56.487205  110500 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:31:56.487226  110500 cache.go:57] Caching tarball of preloaded images
	I0114 10:31:56.487203  110500 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:31:56.487522  110500 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:31:56.487542  110500 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:31:56.487744  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:31:56.509755  110500 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:31:56.509787  110500 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:31:56.509802  110500 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:31:56.509837  110500 start.go:364] acquiring machines lock for multinode-102822: {Name:mkd70e1f2f35b7e6f7c31ed25602b988985e4fa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:31:56.509932  110500 start.go:368] acquired machines lock for "multinode-102822" in 68.904µs
	I0114 10:31:56.509951  110500 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:31:56.509955  110500 fix.go:55] fixHost starting: 
	I0114 10:31:56.510146  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:31:56.531979  110500 fix.go:103] recreateIfNeeded on multinode-102822: state=Stopped err=<nil>
	W0114 10:31:56.532013  110500 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:31:56.535180  110500 out.go:177] * Restarting existing docker container for "multinode-102822" ...
	I0114 10:31:56.536670  110500 cli_runner.go:164] Run: docker start multinode-102822
	I0114 10:31:56.910511  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:31:56.935016  110500 kic.go:426] container "multinode-102822" state is running.
	I0114 10:31:56.935341  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:31:56.958657  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:31:56.958868  110500 machine.go:88] provisioning docker machine ...
	I0114 10:31:56.958889  110500 ubuntu.go:169] provisioning hostname "multinode-102822"
	I0114 10:31:56.958926  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:31:56.981260  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:31:56.981492  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0114 10:31:56.981520  110500 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102822 && echo "multinode-102822" | sudo tee /etc/hostname
	I0114 10:31:56.982146  110500 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38962->127.0.0.1:32867: read: connection reset by peer
	I0114 10:32:00.107919  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102822
	
	I0114 10:32:00.107984  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.131658  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:00.131837  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0114 10:32:00.131856  110500 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102822/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:32:00.247376  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:32:00.247412  110500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:32:00.247433  110500 ubuntu.go:177] setting up certificates
	I0114 10:32:00.247441  110500 provision.go:83] configureAuth start
	I0114 10:32:00.247481  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:32:00.270071  110500 provision.go:138] copyHostCerts
	I0114 10:32:00.270112  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:00.270162  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:32:00.270173  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:00.270248  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:32:00.270337  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:00.270358  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:32:00.270365  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:00.270400  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:32:00.270455  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:00.270478  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:32:00.270487  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:00.270524  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:32:00.270583  110500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.multinode-102822 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-102822]
	I0114 10:32:00.494150  110500 provision.go:172] copyRemoteCerts
	I0114 10:32:00.494232  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:32:00.494276  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.517022  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.602641  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 10:32:00.602710  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:32:00.619533  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 10:32:00.619601  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0114 10:32:00.635920  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 10:32:00.635984  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 10:32:00.652526  110500 provision.go:86] duration metric: configureAuth took 405.072699ms
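	# configureAuth above re-mints the server cert and copies it into the node; a
	# sketch (port, key path and user taken from the surrounding log lines) for
	# checking the SANs it was issued with:
	ssh -p 32867 -i /home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa docker@127.0.0.1 \
	  'sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName'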
	I0114 10:32:00.652560  110500 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:32:00.652742  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:00.652754  110500 machine.go:91] provisioned docker machine in 3.693874899s
	I0114 10:32:00.652761  110500 start.go:300] post-start starting for "multinode-102822" (driver="docker")
	I0114 10:32:00.652767  110500 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:32:00.652803  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:32:00.652841  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.676636  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.758928  110500 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:32:00.761499  110500 command_runner.go:130] > NAME="Ubuntu"
	I0114 10:32:00.761517  110500 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 10:32:00.761524  110500 command_runner.go:130] > ID=ubuntu
	I0114 10:32:00.761532  110500 command_runner.go:130] > ID_LIKE=debian
	I0114 10:32:00.761540  110500 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 10:32:00.761548  110500 command_runner.go:130] > VERSION_ID="20.04"
	I0114 10:32:00.761559  110500 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 10:32:00.761567  110500 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 10:32:00.761572  110500 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 10:32:00.761584  110500 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 10:32:00.761591  110500 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 10:32:00.761595  110500 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 10:32:00.761748  110500 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:32:00.761772  110500 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:32:00.761786  110500 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:32:00.761796  110500 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:32:00.761810  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:32:00.761869  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:32:00.761948  110500 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:32:00.761962  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /etc/ssl/certs/103062.pem
	I0114 10:32:00.762051  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:32:00.768638  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:32:00.785666  110500 start.go:303] post-start completed in 132.893086ms
	I0114 10:32:00.785739  110500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:32:00.785780  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.808883  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.892008  110500 command_runner.go:130] > 18%!
	(MISSING)I0114 10:32:00.892093  110500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:32:00.895774  110500 command_runner.go:130] > 239G
	I0114 10:32:00.895937  110500 fix.go:57] fixHost completed within 4.385975679s
	I0114 10:32:00.895960  110500 start.go:83] releasing machines lock for "multinode-102822", held for 4.386015126s
	I0114 10:32:00.896044  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:32:00.919896  110500 ssh_runner.go:195] Run: cat /version.json
	I0114 10:32:00.919947  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.919973  110500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:32:00.920028  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:00.942987  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:00.946487  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:01.054033  110500 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 10:32:01.054097  110500 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0114 10:32:01.054188  110500 ssh_runner.go:195] Run: systemctl --version
	I0114 10:32:01.057819  110500 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0114 10:32:01.057844  110500 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0114 10:32:01.058053  110500 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:32:01.068862  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:32:01.077997  110500 docker.go:189] disabling docker service ...
	I0114 10:32:01.078119  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:32:01.087867  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:32:01.096584  110500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:32:01.179660  110500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:32:01.257778  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
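	# The systemctl calls above retire crio and dockerd so containerd owns the
	# node; the same switch as one sketch:
	sudo systemctl stop -f crio || true            # tolerate crio being absent
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service             # keep socket activation from restarting it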
	I0114 10:32:01.266818  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:32:01.278503  110500 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:32:01.278530  110500 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
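	# The printf above is mangled by a Go %!s(MISSING) formatting artifact; the
	# file it actually writes (echoed back on the two lines just above) is
	# reproduced cleanly by this sketch:
	sudo mkdir -p /etc
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\nimage-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml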
	I0114 10:32:01.279238  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:32:01.286923  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:32:01.294475  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:32:01.302050  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
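	# The four sed passes above patch /etc/containerd/config.toml one key at a
	# time; an equivalent single pass (sketch; the restart the log performs a few
	# lines down is appended for completeness):
	sudo sed -i \
	  -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' \
	  -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' \
	  -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' \
	  -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' \
	  /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd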
	I0114 10:32:01.309511  110500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:32:01.314863  110500 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0114 10:32:01.315392  110500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:32:01.321309  110500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:32:01.393049  110500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:32:01.455546  110500 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:32:01.455627  110500 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:32:01.458967  110500 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0114 10:32:01.458992  110500 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 10:32:01.458999  110500 command_runner.go:130] > Device: 3fh/63d	Inode: 109         Links: 1
	I0114 10:32:01.459006  110500 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:32:01.459012  110500 command_runner.go:130] > Access: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459016  110500 command_runner.go:130] > Modify: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459023  110500 command_runner.go:130] > Change: 2023-01-14 10:32:01.451146781 +0000
	I0114 10:32:01.459028  110500 command_runner.go:130] >  Birth: -
	I0114 10:32:01.459049  110500 start.go:472] Will wait 60s for crictl version
	I0114 10:32:01.459115  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:32:01.462116  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:32:01.462198  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:32:01.488685  110500 command_runner.go:130] ! time="2023-01-14T10:32:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:32:01.488775  110500 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:32:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:32:12.536033  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:32:12.557499  110500 command_runner.go:130] > Version:  0.1.0
	I0114 10:32:12.557525  110500 command_runner.go:130] > RuntimeName:  containerd
	I0114 10:32:12.557533  110500 command_runner.go:130] > RuntimeVersion:  1.6.10
	I0114 10:32:12.557540  110500 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0114 10:32:12.559041  110500 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:32:12.559089  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:32:12.580521  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:32:12.581939  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:32:12.602970  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:32:12.607003  110500 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:32:12.608552  110500 cli_runner.go:164] Run: docker network inspect multinode-102822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:32:12.630384  110500 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0114 10:32:12.633652  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
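	# The one-liner above pins host.minikube.internal to the network gateway; a
	# quick check inside the node (sketch):
	grep -F 'host.minikube.internal' /etc/hosts    # expect: 192.168.58.1	host.minikube.internal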
	I0114 10:32:12.642818  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:32:12.642867  110500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:32:12.665261  110500 command_runner.go:130] > {
	I0114 10:32:12.665286  110500 command_runner.go:130] >   "images": [
	I0114 10:32:12.665292  110500 command_runner.go:130] >     {
	I0114 10:32:12.665303  110500 command_runner.go:130] >       "id": "sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f",
	I0114 10:32:12.665311  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665320  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20221004-44d545d1"
	I0114 10:32:12.665326  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665335  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665344  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"
	I0114 10:32:12.665354  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665359  110500 command_runner.go:130] >       "size": "25830582",
	I0114 10:32:12.665363  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665369  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665374  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665383  110500 command_runner.go:130] >     },
	I0114 10:32:12.665390  110500 command_runner.go:130] >     {
	I0114 10:32:12.665397  110500 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0114 10:32:12.665403  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665409  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0114 10:32:12.665415  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665419  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665426  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0114 10:32:12.665432  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665437  110500 command_runner.go:130] >       "size": "725911",
	I0114 10:32:12.665443  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665448  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665454  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665458  110500 command_runner.go:130] >     },
	I0114 10:32:12.665464  110500 command_runner.go:130] >     {
	I0114 10:32:12.665470  110500 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0114 10:32:12.665477  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665482  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:32:12.665488  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665496  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665509  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0114 10:32:12.665515  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665519  110500 command_runner.go:130] >       "size": "9058936",
	I0114 10:32:12.665526  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665530  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665537  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665540  110500 command_runner.go:130] >     },
	I0114 10:32:12.665546  110500 command_runner.go:130] >     {
	I0114 10:32:12.665553  110500 command_runner.go:130] >       "id": "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
	I0114 10:32:12.665561  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665570  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:32:12.665576  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665581  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665590  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"
	I0114 10:32:12.665596  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665600  110500 command_runner.go:130] >       "size": "14837849",
	I0114 10:32:12.665607  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665611  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665617  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665621  110500 command_runner.go:130] >     },
	I0114 10:32:12.665627  110500 command_runner.go:130] >     {
	I0114 10:32:12.665634  110500 command_runner.go:130] >       "id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
	I0114 10:32:12.665650  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665658  110500 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.4-0"
	I0114 10:32:12.665662  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665668  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665675  110500 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"
	I0114 10:32:12.665681  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665686  110500 command_runner.go:130] >       "size": "102157811",
	I0114 10:32:12.665692  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665696  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665702  110500 command_runner.go:130] >       },
	I0114 10:32:12.665706  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665716  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665724  110500 command_runner.go:130] >     },
	I0114 10:32:12.665731  110500 command_runner.go:130] >     {
	I0114 10:32:12.665737  110500 command_runner.go:130] >       "id": "sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0",
	I0114 10:32:12.665744  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665750  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:32:12.665754  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665760  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665770  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"
	I0114 10:32:12.665776  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665780  110500 command_runner.go:130] >       "size": "34238163",
	I0114 10:32:12.665786  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665790  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665796  110500 command_runner.go:130] >       },
	I0114 10:32:12.665801  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665807  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665810  110500 command_runner.go:130] >     },
	I0114 10:32:12.665816  110500 command_runner.go:130] >     {
	I0114 10:32:12.665823  110500 command_runner.go:130] >       "id": "sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a",
	I0114 10:32:12.665830  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665835  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:32:12.665842  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665846  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665856  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"
	I0114 10:32:12.665862  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665867  110500 command_runner.go:130] >       "size": "31261869",
	I0114 10:32:12.665873  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.665877  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.665887  110500 command_runner.go:130] >       },
	I0114 10:32:12.665891  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665899  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665906  110500 command_runner.go:130] >     },
	I0114 10:32:12.665910  110500 command_runner.go:130] >     {
	I0114 10:32:12.665916  110500 command_runner.go:130] >       "id": "sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041",
	I0114 10:32:12.665920  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.665927  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:32:12.665931  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665937  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.665945  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"
	I0114 10:32:12.665951  110500 command_runner.go:130] >       ],
	I0114 10:32:12.665956  110500 command_runner.go:130] >       "size": "20265805",
	I0114 10:32:12.665962  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.665966  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.665973  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.665978  110500 command_runner.go:130] >     },
	I0114 10:32:12.665984  110500 command_runner.go:130] >     {
	I0114 10:32:12.665990  110500 command_runner.go:130] >       "id": "sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912",
	I0114 10:32:12.665997  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.666002  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:32:12.666008  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666012  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.666022  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"
	I0114 10:32:12.666028  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666032  110500 command_runner.go:130] >       "size": "15798744",
	I0114 10:32:12.666038  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.666042  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.666048  110500 command_runner.go:130] >       },
	I0114 10:32:12.666052  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.666059  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.666063  110500 command_runner.go:130] >     },
	I0114 10:32:12.666069  110500 command_runner.go:130] >     {
	I0114 10:32:12.666075  110500 command_runner.go:130] >       "id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
	I0114 10:32:12.666083  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.666088  110500 command_runner.go:130] >         "registry.k8s.io/pause:3.8"
	I0114 10:32:12.666094  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666099  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.666108  110500 command_runner.go:130] >         "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	I0114 10:32:12.666114  110500 command_runner.go:130] >       ],
	I0114 10:32:12.666127  110500 command_runner.go:130] >       "size": "311286",
	I0114 10:32:12.666133  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.666138  110500 command_runner.go:130] >         "value": "65535"
	I0114 10:32:12.666143  110500 command_runner.go:130] >       },
	I0114 10:32:12.666148  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.666154  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.666158  110500 command_runner.go:130] >     }
	I0114 10:32:12.666163  110500 command_runner.go:130] >   ]
	I0114 10:32:12.666166  110500 command_runner.go:130] > }
	I0114 10:32:12.666314  110500 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:32:12.666326  110500 containerd.go:467] Images already preloaded, skipping extraction
	I0114 10:32:12.666364  110500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:32:12.687374  110500 command_runner.go:130] > {
	I0114 10:32:12.687399  110500 command_runner.go:130] >   "images": [
	I0114 10:32:12.687405  110500 command_runner.go:130] >     {
	I0114 10:32:12.687415  110500 command_runner.go:130] >       "id": "sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f",
	I0114 10:32:12.687421  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687428  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20221004-44d545d1"
	I0114 10:32:12.687433  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687440  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687460  110500 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"
	I0114 10:32:12.687475  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687483  110500 command_runner.go:130] >       "size": "25830582",
	I0114 10:32:12.687490  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687496  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687503  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687509  110500 command_runner.go:130] >     },
	I0114 10:32:12.687514  110500 command_runner.go:130] >     {
	I0114 10:32:12.687523  110500 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0114 10:32:12.687533  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687545  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0114 10:32:12.687554  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687564  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687580  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0114 10:32:12.687590  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687600  110500 command_runner.go:130] >       "size": "725911",
	I0114 10:32:12.687609  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687617  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687623  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687632  110500 command_runner.go:130] >     },
	I0114 10:32:12.687644  110500 command_runner.go:130] >     {
	I0114 10:32:12.687658  110500 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0114 10:32:12.687668  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687695  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:32:12.687702  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687713  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687734  110500 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0114 10:32:12.687743  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687753  110500 command_runner.go:130] >       "size": "9058936",
	I0114 10:32:12.687763  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687771  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687781  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687797  110500 command_runner.go:130] >     },
	I0114 10:32:12.687804  110500 command_runner.go:130] >     {
	I0114 10:32:12.687818  110500 command_runner.go:130] >       "id": "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
	I0114 10:32:12.687828  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687839  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:32:12.687848  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687858  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687869  110500 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"
	I0114 10:32:12.687877  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687884  110500 command_runner.go:130] >       "size": "14837849",
	I0114 10:32:12.687895  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.687903  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.687913  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.687922  110500 command_runner.go:130] >     },
	I0114 10:32:12.687930  110500 command_runner.go:130] >     {
	I0114 10:32:12.687946  110500 command_runner.go:130] >       "id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
	I0114 10:32:12.687956  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.687965  110500 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.4-0"
	I0114 10:32:12.687969  110500 command_runner.go:130] >       ],
	I0114 10:32:12.687976  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.687991  110500 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"
	I0114 10:32:12.688000  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688008  110500 command_runner.go:130] >       "size": "102157811",
	I0114 10:32:12.688018  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688027  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688037  110500 command_runner.go:130] >       },
	I0114 10:32:12.688046  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688060  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688069  110500 command_runner.go:130] >     },
	I0114 10:32:12.688075  110500 command_runner.go:130] >     {
	I0114 10:32:12.688086  110500 command_runner.go:130] >       "id": "sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0",
	I0114 10:32:12.688096  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688106  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:32:12.688116  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688126  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688141  110500 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"
	I0114 10:32:12.688151  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688161  110500 command_runner.go:130] >       "size": "34238163",
	I0114 10:32:12.688168  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688174  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688181  110500 command_runner.go:130] >       },
	I0114 10:32:12.688192  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688199  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688208  110500 command_runner.go:130] >     },
	I0114 10:32:12.688217  110500 command_runner.go:130] >     {
	I0114 10:32:12.688228  110500 command_runner.go:130] >       "id": "sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a",
	I0114 10:32:12.688238  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688250  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:32:12.688258  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688266  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688278  110500 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"
	I0114 10:32:12.688287  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688295  110500 command_runner.go:130] >       "size": "31261869",
	I0114 10:32:12.688304  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688314  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688323  110500 command_runner.go:130] >       },
	I0114 10:32:12.688333  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688342  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688352  110500 command_runner.go:130] >     },
	I0114 10:32:12.688361  110500 command_runner.go:130] >     {
	I0114 10:32:12.688374  110500 command_runner.go:130] >       "id": "sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041",
	I0114 10:32:12.688387  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688396  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:32:12.688402  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688408  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688418  110500 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"
	I0114 10:32:12.688431  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688439  110500 command_runner.go:130] >       "size": "20265805",
	I0114 10:32:12.688447  110500 command_runner.go:130] >       "uid": null,
	I0114 10:32:12.688455  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688465  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688471  110500 command_runner.go:130] >     },
	I0114 10:32:12.688484  110500 command_runner.go:130] >     {
	I0114 10:32:12.688495  110500 command_runner.go:130] >       "id": "sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912",
	I0114 10:32:12.688502  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688511  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:32:12.688520  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688527  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688542  110500 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"
	I0114 10:32:12.688551  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688558  110500 command_runner.go:130] >       "size": "15798744",
	I0114 10:32:12.688566  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688571  110500 command_runner.go:130] >         "value": "0"
	I0114 10:32:12.688580  110500 command_runner.go:130] >       },
	I0114 10:32:12.688587  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688597  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688603  110500 command_runner.go:130] >     },
	I0114 10:32:12.688613  110500 command_runner.go:130] >     {
	I0114 10:32:12.688626  110500 command_runner.go:130] >       "id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
	I0114 10:32:12.688636  110500 command_runner.go:130] >       "repoTags": [
	I0114 10:32:12.688644  110500 command_runner.go:130] >         "registry.k8s.io/pause:3.8"
	I0114 10:32:12.688653  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688661  110500 command_runner.go:130] >       "repoDigests": [
	I0114 10:32:12.688677  110500 command_runner.go:130] >         "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	I0114 10:32:12.688684  110500 command_runner.go:130] >       ],
	I0114 10:32:12.688742  110500 command_runner.go:130] >       "size": "311286",
	I0114 10:32:12.688753  110500 command_runner.go:130] >       "uid": {
	I0114 10:32:12.688761  110500 command_runner.go:130] >         "value": "65535"
	I0114 10:32:12.688767  110500 command_runner.go:130] >       },
	I0114 10:32:12.688775  110500 command_runner.go:130] >       "username": "",
	I0114 10:32:12.688790  110500 command_runner.go:130] >       "spec": null
	I0114 10:32:12.688801  110500 command_runner.go:130] >     }
	I0114 10:32:12.688808  110500 command_runner.go:130] >   ]
	I0114 10:32:12.688813  110500 command_runner.go:130] > }
	I0114 10:32:12.689381  110500 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:32:12.689398  110500 cache_images.go:84] Images are preloaded, skipping loading
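The two "preloaded" decisions above (containerd.go:553 and cache_images.go:84) come down to parsing the crictl JSON shown above and confirming that every image required for v1.25.3 appears among the repoTags. A minimal Go sketch of that comparison, assuming a hypothetical required list and that crictl is reachable via sudo; the struct fields mirror the JSON printed in this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json` above.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func preloaded(required []string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                return false, fmt.Errorf("missing %s", want)
            }
        }
        return true, nil
    }

    func main() {
        // Two of the tags visible in the listing above.
        ok, err := preloaded([]string{
            "registry.k8s.io/kube-apiserver:v1.25.3",
            "registry.k8s.io/pause:3.8",
        })
        fmt.Println(ok, err)
    }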
	I0114 10:32:12.689437  110500 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:32:12.710587  110500 command_runner.go:130] > {
	I0114 10:32:12.710608  110500 command_runner.go:130] >   "status": {
	I0114 10:32:12.710615  110500 command_runner.go:130] >     "conditions": [
	I0114 10:32:12.710621  110500 command_runner.go:130] >       {
	I0114 10:32:12.710628  110500 command_runner.go:130] >         "type": "RuntimeReady",
	I0114 10:32:12.710634  110500 command_runner.go:130] >         "status": true,
	I0114 10:32:12.710640  110500 command_runner.go:130] >         "reason": "",
	I0114 10:32:12.710646  110500 command_runner.go:130] >         "message": ""
	I0114 10:32:12.710651  110500 command_runner.go:130] >       },
	I0114 10:32:12.710657  110500 command_runner.go:130] >       {
	I0114 10:32:12.710668  110500 command_runner.go:130] >         "type": "NetworkReady",
	I0114 10:32:12.710677  110500 command_runner.go:130] >         "status": true,
	I0114 10:32:12.710687  110500 command_runner.go:130] >         "reason": "",
	I0114 10:32:12.710696  110500 command_runner.go:130] >         "message": ""
	I0114 10:32:12.710705  110500 command_runner.go:130] >       }
	I0114 10:32:12.710713  110500 command_runner.go:130] >     ]
	I0114 10:32:12.710720  110500 command_runner.go:130] >   },
	I0114 10:32:12.710728  110500 command_runner.go:130] >   "cniconfig": {
	I0114 10:32:12.710738  110500 command_runner.go:130] >     "PluginDirs": [
	I0114 10:32:12.710749  110500 command_runner.go:130] >       "/opt/cni/bin"
	I0114 10:32:12.710758  110500 command_runner.go:130] >     ],
	I0114 10:32:12.710773  110500 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.mk",
	I0114 10:32:12.710784  110500 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0114 10:32:12.710792  110500 command_runner.go:130] >     "Prefix": "eth",
	I0114 10:32:12.710802  110500 command_runner.go:130] >     "Networks": [
	I0114 10:32:12.710812  110500 command_runner.go:130] >       {
	I0114 10:32:12.710820  110500 command_runner.go:130] >         "Config": {
	I0114 10:32:12.710835  110500 command_runner.go:130] >           "Name": "cni-loopback",
	I0114 10:32:12.710847  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:32:12.710856  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:32:12.710866  110500 command_runner.go:130] >             {
	I0114 10:32:12.710875  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.710886  110500 command_runner.go:130] >                 "type": "loopback",
	I0114 10:32:12.710896  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:32:12.710902  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.710907  110500 command_runner.go:130] >               },
	I0114 10:32:12.710917  110500 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0114 10:32:12.710927  110500 command_runner.go:130] >             }
	I0114 10:32:12.710936  110500 command_runner.go:130] >           ],
	I0114 10:32:12.710949  110500 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0114 10:32:12.710956  110500 command_runner.go:130] >         },
	I0114 10:32:12.710967  110500 command_runner.go:130] >         "IFName": "lo"
	I0114 10:32:12.710977  110500 command_runner.go:130] >       },
	I0114 10:32:12.710986  110500 command_runner.go:130] >       {
	I0114 10:32:12.710994  110500 command_runner.go:130] >         "Config": {
	I0114 10:32:12.711008  110500 command_runner.go:130] >           "Name": "kindnet",
	I0114 10:32:12.711018  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:32:12.711025  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:32:12.711035  110500 command_runner.go:130] >             {
	I0114 10:32:12.711044  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.711055  110500 command_runner.go:130] >                 "type": "ptp",
	I0114 10:32:12.711066  110500 command_runner.go:130] >                 "ipam": {
	I0114 10:32:12.711078  110500 command_runner.go:130] >                   "type": "host-local"
	I0114 10:32:12.711088  110500 command_runner.go:130] >                 },
	I0114 10:32:12.711096  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.711106  110500 command_runner.go:130] >               },
	I0114 10:32:12.711127  110500 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0114 10:32:12.711140  110500 command_runner.go:130] >             },
	I0114 10:32:12.711150  110500 command_runner.go:130] >             {
	I0114 10:32:12.711159  110500 command_runner.go:130] >               "Network": {
	I0114 10:32:12.711172  110500 command_runner.go:130] >                 "type": "portmap",
	I0114 10:32:12.711183  110500 command_runner.go:130] >                 "capabilities": {
	I0114 10:32:12.711194  110500 command_runner.go:130] >                   "portMappings": true
	I0114 10:32:12.711201  110500 command_runner.go:130] >                 },
	I0114 10:32:12.711211  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:32:12.711223  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:32:12.711231  110500 command_runner.go:130] >               },
	I0114 10:32:12.711245  110500 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0114 10:32:12.711255  110500 command_runner.go:130] >             }
	I0114 10:32:12.711263  110500 command_runner.go:130] >           ],
	I0114 10:32:12.711307  110500 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0114 10:32:12.711317  110500 command_runner.go:130] >         },
	I0114 10:32:12.711325  110500 command_runner.go:130] >         "IFName": "eth0"
	I0114 10:32:12.711331  110500 command_runner.go:130] >       }
	I0114 10:32:12.711337  110500 command_runner.go:130] >     ]
	I0114 10:32:12.711348  110500 command_runner.go:130] >   },
	I0114 10:32:12.711358  110500 command_runner.go:130] >   "config": {
	I0114 10:32:12.711366  110500 command_runner.go:130] >     "containerd": {
	I0114 10:32:12.711377  110500 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0114 10:32:12.711388  110500 command_runner.go:130] >       "defaultRuntimeName": "default",
	I0114 10:32:12.711399  110500 command_runner.go:130] >       "defaultRuntime": {
	I0114 10:32:12.711409  110500 command_runner.go:130] >         "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711419  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:32:12.711430  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:32:12.711438  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:32:12.711449  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:32:12.711461  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:32:12.711472  110500 command_runner.go:130] >         "options": null,
	I0114 10:32:12.711484  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:32:12.711495  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:32:12.711504  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:32:12.711513  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:32:12.711522  110500 command_runner.go:130] >       },
	I0114 10:32:12.711531  110500 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0114 10:32:12.711542  110500 command_runner.go:130] >         "runtimeType": "",
	I0114 10:32:12.711553  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:32:12.711563  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:32:12.711575  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:32:12.711586  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:32:12.711597  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:32:12.711607  110500 command_runner.go:130] >         "options": null,
	I0114 10:32:12.711616  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:32:12.711627  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:32:12.711638  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:32:12.711649  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:32:12.711658  110500 command_runner.go:130] >       },
	I0114 10:32:12.711684  110500 command_runner.go:130] >       "runtimes": {
	I0114 10:32:12.711694  110500 command_runner.go:130] >         "default": {
	I0114 10:32:12.711706  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711717  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:32:12.711728  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:32:12.711739  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:32:12.711751  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:32:12.711762  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:32:12.711781  110500 command_runner.go:130] >           "options": null,
	I0114 10:32:12.711794  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:32:12.711805  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:32:12.711816  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:32:12.711826  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:32:12.711837  110500 command_runner.go:130] >         },
	I0114 10:32:12.711848  110500 command_runner.go:130] >         "runc": {
	I0114 10:32:12.711861  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:32:12.711871  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:32:12.711879  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:32:12.711890  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:32:12.711902  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:32:12.711912  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:32:12.711923  110500 command_runner.go:130] >           "options": {
	I0114 10:32:12.711975  110500 command_runner.go:130] >             "SystemdCgroup": false
	I0114 10:32:12.711989  110500 command_runner.go:130] >           },
	I0114 10:32:12.711998  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:32:12.712006  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:32:12.712017  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:32:12.712028  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:32:12.712036  110500 command_runner.go:130] >         }
	I0114 10:32:12.712045  110500 command_runner.go:130] >       },
	I0114 10:32:12.712057  110500 command_runner.go:130] >       "noPivot": false,
	I0114 10:32:12.712068  110500 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0114 10:32:12.712078  110500 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0114 10:32:12.712089  110500 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0114 10:32:12.712099  110500 command_runner.go:130] >     },
	I0114 10:32:12.712107  110500 command_runner.go:130] >     "cni": {
	I0114 10:32:12.712118  110500 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0114 10:32:12.712130  110500 command_runner.go:130] >       "confDir": "/etc/cni/net.mk",
	I0114 10:32:12.712140  110500 command_runner.go:130] >       "maxConfNum": 1,
	I0114 10:32:12.712151  110500 command_runner.go:130] >       "confTemplate": "",
	I0114 10:32:12.712161  110500 command_runner.go:130] >       "ipPref": ""
	I0114 10:32:12.712170  110500 command_runner.go:130] >     },
	I0114 10:32:12.712177  110500 command_runner.go:130] >     "registry": {
	I0114 10:32:12.712189  110500 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0114 10:32:12.712199  110500 command_runner.go:130] >       "mirrors": null,
	I0114 10:32:12.712209  110500 command_runner.go:130] >       "configs": null,
	I0114 10:32:12.712220  110500 command_runner.go:130] >       "auths": null,
	I0114 10:32:12.712232  110500 command_runner.go:130] >       "headers": null
	I0114 10:32:12.712242  110500 command_runner.go:130] >     },
	I0114 10:32:12.712251  110500 command_runner.go:130] >     "imageDecryption": {
	I0114 10:32:12.712261  110500 command_runner.go:130] >       "keyModel": "node"
	I0114 10:32:12.712267  110500 command_runner.go:130] >     },
	I0114 10:32:12.712274  110500 command_runner.go:130] >     "disableTCPService": true,
	I0114 10:32:12.712281  110500 command_runner.go:130] >     "streamServerAddress": "",
	I0114 10:32:12.712292  110500 command_runner.go:130] >     "streamServerPort": "10010",
	I0114 10:32:12.712303  110500 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0114 10:32:12.712312  110500 command_runner.go:130] >     "enableSelinux": false,
	I0114 10:32:12.712324  110500 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0114 10:32:12.712337  110500 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.8",
	I0114 10:32:12.712348  110500 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0114 10:32:12.712360  110500 command_runner.go:130] >     "systemdCgroup": false,
	I0114 10:32:12.712368  110500 command_runner.go:130] >     "enableTLSStreaming": false,
	I0114 10:32:12.712379  110500 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0114 10:32:12.712388  110500 command_runner.go:130] >       "tlsCertFile": "",
	I0114 10:32:12.712398  110500 command_runner.go:130] >       "tlsKeyFile": ""
	I0114 10:32:12.712407  110500 command_runner.go:130] >     },
	I0114 10:32:12.712417  110500 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0114 10:32:12.712427  110500 command_runner.go:130] >     "disableCgroup": false,
	I0114 10:32:12.712436  110500 command_runner.go:130] >     "disableApparmor": false,
	I0114 10:32:12.712446  110500 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0114 10:32:12.712455  110500 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0114 10:32:12.712466  110500 command_runner.go:130] >     "disableProcMount": false,
	I0114 10:32:12.712477  110500 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0114 10:32:12.712487  110500 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0114 10:32:12.712496  110500 command_runner.go:130] >     "disableHugetlbController": true,
	I0114 10:32:12.712508  110500 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0114 10:32:12.712519  110500 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0114 10:32:12.712530  110500 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0114 10:32:12.712544  110500 command_runner.go:130] >     "enableUnprivilegedPorts": false,
	I0114 10:32:12.712557  110500 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0114 10:32:12.712569  110500 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0114 10:32:12.712582  110500 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0114 10:32:12.712594  110500 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0114 10:32:12.712607  110500 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0114 10:32:12.712615  110500 command_runner.go:130] >   },
	I0114 10:32:12.712623  110500 command_runner.go:130] >   "golang": "go1.18.8",
	I0114 10:32:12.712635  110500 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0114 10:32:12.712647  110500 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0114 10:32:12.712656  110500 command_runner.go:130] > }
	I0114 10:32:12.712858  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:32:12.712872  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:32:12.712887  110500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
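The selection above (an empty CNI flag, three nodes found, kindnet recommended) reduces to a short decision rule. The sketch below is a hypothetical reconstruction, not minikube's actual cni.go logic; the empty-string fallback for single-node clusters is an assumption:

    package main

    import "fmt"

    // chooseCNI is a hypothetical reconstruction of the decision logged above:
    // an explicit --cni flag wins; otherwise multi-node clusters get kindnet.
    func chooseCNI(explicit string, nodeCount int) string {
        if explicit != "" {
            return explicit
        }
        if nodeCount > 1 {
            return "kindnet" // "3 nodes found, recommending kindnet"
        }
        return "" // single node: leave selection to the driver/runtime default
    }

    func main() {
        fmt.Println(chooseCNI("", 3)) // prints "kindnet", matching the log
    }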
	I0114 10:32:12.712904  110500 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102822 NodeName:multinode-102822 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:32:12.713036  110500 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-102822"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
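The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for sanity-checking such a stream before kubeadm consumes it, assuming gopkg.in/yaml.v2 is available; only the apiVersion/kind check is shown:

    package main

    import (
        "fmt"
        "io"
        "os"

        yaml "gopkg.in/yaml.v2"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Decode each "---"-separated document in turn.
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err) // malformed document in the stream
            }
            // Every document should at least declare apiVersion and kind.
            fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
        }
    }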
	
	I0114 10:32:12.713135  110500 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-102822 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:32:12.713190  110500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:32:12.719488  110500 command_runner.go:130] > kubeadm
	I0114 10:32:12.719509  110500 command_runner.go:130] > kubectl
	I0114 10:32:12.719515  110500 command_runner.go:130] > kubelet
	I0114 10:32:12.720035  110500 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:32:12.720098  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:32:12.726696  110500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0114 10:32:12.738909  110500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:32:12.751222  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
	I0114 10:32:12.763791  110500 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:32:12.766553  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:32:12.775632  110500 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822 for IP: 192.168.58.2
	I0114 10:32:12.775780  110500 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:32:12.775823  110500 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:32:12.775880  110500 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key
	I0114 10:32:12.775939  110500 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key.cee25041
	I0114 10:32:12.775975  110500 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key
	I0114 10:32:12.775986  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0114 10:32:12.775995  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0114 10:32:12.776009  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0114 10:32:12.776020  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0114 10:32:12.776030  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 10:32:12.776040  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 10:32:12.776050  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:32:12.776060  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:32:12.776095  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:32:12.776118  110500 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:32:12.776127  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:32:12.776146  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:32:12.776170  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:32:12.776190  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:32:12.776223  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:32:12.776254  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem -> /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.776268  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /usr/share/ca-certificates/103062.pem
	I0114 10:32:12.776276  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:12.776801  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:32:12.793649  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:32:12.809955  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:32:12.826165  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:32:12.842333  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:32:12.858766  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:32:12.874864  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:32:12.891037  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:32:12.907157  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:32:12.923498  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:32:12.940509  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:32:12.957076  110500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
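"scp memory -->" in the lines above means an in-memory buffer is written straight to the remote path, with no local source file. One way to express the same operation over a plain SSH session, as a sketch; driving ssh with sudo tee (rather than minikube's internal runner) is an assumption for illustration:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // pushBytes streams data to path on host over ssh, the moral equivalent
    // of the "scp memory --> /var/lib/minikube/kubeconfig (738 bytes)" step.
    func pushBytes(host, path string, data []byte) error {
        cmd := exec.Command("ssh", host, "sudo", "tee", path)
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        err := pushBytes("multinode-102822", "/var/lib/minikube/kubeconfig",
            []byte("apiVersion: v1\nkind: Config\n")) // placeholder payload
        fmt.Println(err)
    }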
	I0114 10:32:12.969247  110500 ssh_runner.go:195] Run: openssl version
	I0114 10:32:12.973757  110500 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 10:32:12.973888  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:32:12.980925  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983712  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983767  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.983798  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:32:12.988253  110500 command_runner.go:130] > 51391683
	I0114 10:32:12.988302  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:32:12.994808  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:32:13.001692  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004632  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004666  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.004710  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:32:13.009155  110500 command_runner.go:130] > 3ec20f2e
	I0114 10:32:13.009284  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:32:13.015757  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:32:13.022799  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025630  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025669  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.025717  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:32:13.030092  110500 command_runner.go:130] > b5213941
	I0114 10:32:13.030263  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
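The three install passes above follow the standard OpenSSL trust-store layout: compute the certificate's subject hash with openssl x509 -hash -noout and link the file into /etc/ssl/certs as <hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 here). A compact Go sketch of one pass, assuming openssl on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert links certPath into /etc/ssl/certs under its subject hash,
    // mirroring the openssl/ln -fs commands run above.
    func installCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any existing link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }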
	I0114 10:32:13.036698  110500 kubeadm.go:396] StartCluster: {Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:32:13.036791  110500 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:32:13.036836  110500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:32:13.058223  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:13.058244  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:13.058251  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:13.058260  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:13.058269  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:13.058277  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:13.058286  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:13.058300  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:13.060045  110500 cri.go:87] found id: "8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	I0114 10:32:13.060064  110500 cri.go:87] found id: "dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653"
	I0114 10:32:13.060072  110500 cri.go:87] found id: "fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642"
	I0114 10:32:13.060078  110500 cri.go:87] found id: "6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2"
	I0114 10:32:13.060084  110500 cri.go:87] found id: "1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028"
	I0114 10:32:13.060094  110500 cri.go:87] found id: "9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:32:13.060102  110500 cri.go:87] found id: "72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf"
	I0114 10:32:13.060109  110500 cri.go:87] found id: "1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2"
	I0114 10:32:13.060124  110500 cri.go:87] found id: ""
	I0114 10:32:13.060170  110500 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0114 10:32:13.070971  110500 command_runner.go:130] > null
	I0114 10:32:13.071003  110500 cri.go:114] JSON = null
	W0114 10:32:13.071044  110500 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
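
This warning is the pivot of the restart path: crictl reports eight kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` prints `null`, so minikube cannot unpause anything and instead falls back to stopping the containers and replaying the kubeadm phases (visible below). A minimal sketch of the same consistency check, assuming sudo access to crictl and runc on the node; the program is illustrative, not minikube's own code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // Compare what the CRI reports against what the low-level runtime sees.
    // crictl and runc disagreeing (8 vs. 0 here) is exactly the condition
    // that produces the "unpause failed" warning above.
    func main() {
        psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(psOut))

        runcOut, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        // runc prints a JSON array, or the literal "null" when it sees nothing;
        // unmarshalling "null" leaves the slice nil.
        var listed []map[string]any
        if err := json.Unmarshal(runcOut, &listed); err != nil {
            panic(err)
        }
        if len(listed) != len(ids) {
            fmt.Printf("cannot unpause: runc lists %d containers, crictl ps shows %d\n",
                len(listed), len(ids))
        }
    }
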
	I0114 10:32:13.071091  110500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:32:13.077185  110500 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0114 10:32:13.077211  110500 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0114 10:32:13.077219  110500 command_runner.go:130] > /var/lib/minikube/etcd:
	I0114 10:32:13.077224  110500 command_runner.go:130] > member
	I0114 10:32:13.077710  110500 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:32:13.077727  110500 kubeadm.go:627] restartCluster start
	I0114 10:32:13.077773  110500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:32:13.083937  110500 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.084292  110500 kubeconfig.go:135] verify returned: extract IP: "multinode-102822" does not appear in /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:13.084399  110500 kubeconfig.go:146] "multinode-102822" context is missing from /home/jenkins/minikube-integration/15642-3818/kubeconfig - will repair!
	I0114 10:32:13.084667  110500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:13.085127  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:13.085339  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:13.085718  110500 cert_rotation.go:137] Starting client certificate rotation controller
	I0114 10:32:13.085897  110500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:32:13.092314  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.092361  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.099983  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.300390  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.300471  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.309061  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.500394  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.500496  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.508843  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.700076  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.700158  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.708648  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:13.900982  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:13.901059  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:13.909312  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.100665  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.100752  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.108909  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.300163  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.300255  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.308427  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.500734  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.500820  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.509529  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.700875  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.700944  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.709372  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:14.900776  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:14.900858  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:14.909023  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.100211  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.100291  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.108635  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.300982  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.301056  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.309251  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.500594  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.500686  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.509036  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.700390  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.700477  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.709024  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:15.900241  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:15.900308  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:15.908659  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.101018  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:16.101096  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:16.109426  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.109444  110500 api_server.go:165] Checking apiserver status ...
	I0114 10:32:16.109480  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 10:32:16.117151  110500 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.117179  110500 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0114 10:32:16.117187  110500 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:32:16.117204  110500 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:32:16.117249  110500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:32:16.139034  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:16.139057  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:16.139065  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:16.139074  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:16.139082  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:16.139089  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:16.139097  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:16.139108  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.140873  110500 cri.go:87] found id: "8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	I0114 10:32:16.140897  110500 cri.go:87] found id: "dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653"
	I0114 10:32:16.140903  110500 cri.go:87] found id: "fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642"
	I0114 10:32:16.140909  110500 cri.go:87] found id: "6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2"
	I0114 10:32:16.140915  110500 cri.go:87] found id: "1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028"
	I0114 10:32:16.140925  110500 cri.go:87] found id: "9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:32:16.140940  110500 cri.go:87] found id: "72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf"
	I0114 10:32:16.140949  110500 cri.go:87] found id: "1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2"
	I0114 10:32:16.140963  110500 cri.go:87] found id: ""
	I0114 10:32:16.140974  110500 cri.go:232] Stopping containers: [8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653 fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2]
	I0114 10:32:16.141047  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:32:16.143873  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:32:16.143954  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653 fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.164186  110500 command_runner.go:130] > 8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f
	I0114 10:32:16.164545  110500 command_runner.go:130] > dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653
	I0114 10:32:16.164961  110500 command_runner.go:130] > fa1fbfcc6ff2a41b8faca2bf03317a3f0774ab3fc1b40ad0529d15da2dadb642
	I0114 10:32:16.165442  110500 command_runner.go:130] > 6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2
	I0114 10:32:16.165866  110500 command_runner.go:130] > 1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028
	I0114 10:32:16.166173  110500 command_runner.go:130] > 9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22
	I0114 10:32:16.166530  110500 command_runner.go:130] > 72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf
	I0114 10:32:16.166912  110500 command_runner.go:130] > 1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2
	I0114 10:32:16.168495  110500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 10:32:16.178319  110500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:32:16.184529  110500 command_runner.go:130] > -rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	I0114 10:32:16.184551  110500 command_runner.go:130] > -rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.184560  110500 command_runner.go:130] > -rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	I0114 10:32:16.184578  110500 command_runner.go:130] > -rw------- 1 root root 5604 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	I0114 10:32:16.185073  110500 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	
	I0114 10:32:16.185115  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:32:16.191129  110500 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 10:32:16.191775  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:32:16.197622  110500 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 10:32:16.198190  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.204644  110500 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.204691  110500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 10:32:16.211037  110500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:32:16.217158  110500 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:32:16.217202  110500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
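
The grep-then-remove sequence above implements a simple repair rule: each component kubeconfig is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is deleted so the following `kubeadm init phase kubeconfig` regenerates it (here controller-manager.conf and scheduler.conf are removed). A sketch of that rule assuming direct file access; the helper name is hypothetical:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // reconcileKubeconfigs keeps a component kubeconfig only if it already
    // targets the expected control-plane endpoint; anything else is removed
    // so the subsequent "kubeadm init phase kubeconfig" rewrites it.
    func reconcileKubeconfigs(endpoint string, files []string) error {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                return err
            }
            if strings.Contains(string(data), endpoint) {
                continue // endpoint present, leave the file in place
            }
            fmt.Printf("%q not found in %s - removing\n", endpoint, f)
            if err := os.Remove(f); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        if err := reconcileKubeconfigs("https://control-plane.minikube.internal:8443", files); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
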
	I0114 10:32:16.223266  110500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:32:16.229757  110500 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 10:32:16.229774  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:16.267698  110500 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:32:16.267727  110500 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0114 10:32:16.267829  110500 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0114 10:32:16.268005  110500 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 10:32:16.268198  110500 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0114 10:32:16.268305  110500 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0114 10:32:16.268411  110500 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0114 10:32:16.268622  110500 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0114 10:32:16.268811  110500 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0114 10:32:16.269005  110500 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 10:32:16.269151  110500 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 10:32:16.269270  110500 command_runner.go:130] > [certs] Using the existing "sa" key
	I0114 10:32:16.271593  110500 command_runner.go:130] ! W0114 10:32:16.262790     715 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:16.271629  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:16.308452  110500 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:32:16.591661  110500 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0114 10:32:16.834807  110500 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0114 10:32:16.917085  110500 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:32:16.963606  110500 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:32:16.966257  110500 command_runner.go:130] ! W0114 10:32:16.303745     726 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:16.966297  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.014855  110500 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:32:17.015614  110500 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:32:17.015700  110500 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 10:32:17.097994  110500 command_runner.go:130] ! W0114 10:32:16.998756     739 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:17.098089  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.134582  110500 command_runner.go:130] ! W0114 10:32:17.134094     774 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:17.147442  110500 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:32:17.147476  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:32:17.147487  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:32:17.147503  110500 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:32:17.147521  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:17.236640  110500 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:32:17.242814  110500 command_runner.go:130] ! W0114 10:32:17.230809     792 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
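
Note that instead of a full `kubeadm init`, the restart replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence with the command strings taken from the log; the loop itself is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Replay the kubeadm init phases one at a time, as the restart path does,
    // instead of running a full "kubeadm init".
    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" `+
                `kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Print(string(out))
            if err != nil {
                panic(err) // a failed phase aborts the restart
            }
        }
    }
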
	I0114 10:32:17.242849  110500 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:32:17.242894  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:17.752564  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:18.252426  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:18.260999  110500 command_runner.go:130] > 1113
	I0114 10:32:18.261657  110500 api_server.go:71] duration metric: took 1.01880732s to wait for apiserver process to appear ...
	I0114 10:32:18.261681  110500 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:32:18.261693  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:21.029950  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 10:32:21.029985  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 10:32:21.530625  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:21.535017  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:32:21.535038  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:32:22.030583  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:22.034640  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:32:22.034667  110500 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:32:22.530186  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:22.535299  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0114 10:32:22.535363  110500 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0114 10:32:22.535370  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:22.535378  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:22.535387  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:22.542430  110500 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0114 10:32:22.542456  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:22.542463  110500 round_trippers.go:580]     Audit-Id: 1aad19f3-6767-4611-a5ba-372dd35e9aaa
	I0114 10:32:22.542469  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:22.542478  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:22.542486  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:22.542495  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:22.542501  110500 round_trippers.go:580]     Content-Length: 263
	I0114 10:32:22.542510  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 10:32:22.542548  110500 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 10:32:22.542642  110500 api_server.go:140] control plane version: v1.25.3
	I0114 10:32:22.542659  110500 api_server.go:130] duration metric: took 4.280973232s to wait for apiserver health ...
	I0114 10:32:22.542670  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:32:22.542681  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:32:22.544760  110500 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 10:32:22.546388  110500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:32:22.549910  110500 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 10:32:22.549962  110500 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 10:32:22.549975  110500 command_runner.go:130] > Device: 34h/52d	Inode: 565966      Links: 1
	I0114 10:32:22.549983  110500 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:32:22.549994  110500 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:32:22.550002  110500 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:32:22.550010  110500 command_runner.go:130] > Change: 2023-01-14 10:06:59.488187836 +0000
	I0114 10:32:22.550015  110500 command_runner.go:130] >  Birth: -
	I0114 10:32:22.550073  110500 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:32:22.550086  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:32:22.563145  110500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:32:23.556622  110500 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:32:23.558324  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:32:23.559964  110500 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 10:32:23.572522  110500 command_runner.go:130] > daemonset.apps/kindnet configured
	I0114 10:32:23.576291  110500 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.013116951s)
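
With three nodes in the profile, minikube recommends kindnet and applies its manifest through the cluster's own kubectl, as above; the `unchanged`/`configured` results show the objects survived the restart. Reduced to its essence, the selection seen here is roughly the following (illustrative only; the real logic in cni.go also considers the --cni flag, driver, and container runtime):

    package cni

    // chooseCNI is an illustrative reduction of the decision in the log:
    // honor an explicit request, otherwise pick kindnet for multi-node
    // clusters. Single-node defaults vary by driver and runtime.
    func chooseCNI(requested string, nodeCount int) string {
        if requested != "" {
            return requested
        }
        if nodeCount > 1 {
            return "kindnet" // "3 nodes found, recommending kindnet"
        }
        return ""
    }
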
	I0114 10:32:23.576318  110500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:32:23.576411  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:23.576418  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.576426  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.576434  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.580021  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:23.580051  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.580062  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.580070  110500 round_trippers.go:580]     Audit-Id: d0301af9-af93-4972-a603-26d225a78b49
	I0114 10:32:23.580078  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.580086  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.580101  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.580109  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.580814  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"684"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84105 chars]
	I0114 10:32:23.586341  110500 system_pods.go:59] 12 kube-system pods found
	I0114 10:32:23.586386  110500 system_pods.go:61] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:23.586402  110500 system_pods.go:61] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 10:32:23.586417  110500 system_pods.go:61] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:23.586424  110500 system_pods.go:61] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:23.586434  110500 system_pods.go:61] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0114 10:32:23.586442  110500 system_pods.go:61] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:23.586451  110500 system_pods.go:61] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 10:32:23.586463  110500 system_pods.go:61] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:23.586473  110500 system_pods.go:61] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:23.586480  110500 system_pods.go:61] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:23.586490  110500 system_pods.go:61] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:23.586499  110500 system_pods.go:61] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running
	I0114 10:32:23.586505  110500 system_pods.go:74] duration metric: took 10.181942ms to wait for pod list to return data ...
	I0114 10:32:23.586518  110500 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:32:23.586586  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:23.586597  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.586606  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.586613  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.588779  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.588796  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.588806  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.588815  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.588826  110500 round_trippers.go:580]     Audit-Id: 6ef374f5-6c43-4398-b878-dabcf026fa21
	I0114 10:32:23.588834  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.588845  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.588857  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.589115  110500 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"684"},"items":[{"metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15962 chars]
	I0114 10:32:23.589921  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589939  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589950  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589954  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589958  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:23.589961  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:23.589966  110500 node_conditions.go:105] duration metric: took 3.442775ms to run NodePressure ...
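
The NodePressure pass reads each node's status from /api/v1/nodes and logs its ephemeral-storage and CPU capacity, confirming all three nodes report sane resources. An equivalent query with client-go might look like this (the kubeconfig path is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Print the two capacity figures the NodePressure pass logs for each node:
    // ephemeral storage and CPU.
    func main() {
        // Kubeconfig path is illustrative; any kubeconfig for the cluster works.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
                n.Name, n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
        }
    }
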
	I0114 10:32:23.589987  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:32:23.691341  110500 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0114 10:32:23.734463  110500 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0114 10:32:23.736834  110500 command_runner.go:130] ! W0114 10:32:23.629296    1801 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:32:23.736875  110500 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 10:32:23.736963  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0114 10:32:23.736973  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.736985  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.736994  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.739456  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.739481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.739491  110500 round_trippers.go:580]     Audit-Id: c46d33b0-2b93-4009-a7b0-a83f39889d32
	I0114 10:32:23.739500  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.739509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.739522  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.739534  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.739546  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.739840  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"686"},"items":[{"metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30422 chars]
	I0114 10:32:23.740800  110500 kubeadm.go:778] kubelet initialised
	I0114 10:32:23.740814  110500 kubeadm.go:779] duration metric: took 3.928883ms waiting for restarted kubelet to initialise ...
	I0114 10:32:23.740821  110500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
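
The pod_ready waiter re-fetches each system-critical pod and its node until the pod's Ready condition reports True, within the 4m0s budget. The core predicate reduces to a few lines (types from k8s.io/api/core/v1; a sketch, not minikube's code):

    package ready

    import v1 "k8s.io/api/core/v1"

    // isPodReady mirrors the predicate behind the "Ready":"True" lines below:
    // a pod is Ready when its PodReady condition reports True.
    func isPodReady(pod *v1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == v1.PodReady {
                return c.Status == v1.ConditionTrue
            }
        }
        return false
    }
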
	I0114 10:32:23.740868  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:23.740876  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.740885  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.740894  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.743447  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:23.743463  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.743469  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.743485  110500 round_trippers.go:580]     Audit-Id: c6bded4f-4aa4-42be-a4e4-20ebe0546a46
	I0114 10:32:23.743493  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.743500  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.743509  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.743518  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.744192  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"686"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84105 chars]
	I0114 10:32:23.746598  110500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.746657  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:23.746664  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.746672  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.746681  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.748217  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.748238  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.748245  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.748253  110500 round_trippers.go:580]     Audit-Id: d1d08fdc-7445-4898-b7eb-6476beda912d
	I0114 10:32:23.748262  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.748274  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.748286  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.748294  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.748395  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"408","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6329 chars]
	I0114 10:32:23.748815  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:23.748828  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.748835  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.748845  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.750211  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.750225  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.750232  110500 round_trippers.go:580]     Audit-Id: a95fc07c-b593-46bf-8f30-63ff02257647
	I0114 10:32:23.750240  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.750248  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.750257  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.750272  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.750283  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.750383  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:23.750658  110500 pod_ready.go:92] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:23.750671  110500 pod_ready.go:81] duration metric: took 4.054192ms waiting for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.750678  110500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:23.750715  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:23.750722  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.750729  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.750734  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.752197  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.752212  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.752220  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.752226  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.752231  110500 round_trippers.go:580]     Audit-Id: e9cb9640-9e23-42f8-94a1-c70e896a63a2
	I0114 10:32:23.752237  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.752246  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.752262  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.752376  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:23.752697  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:23.752709  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:23.752716  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:23.752722  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:23.754027  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:23.754047  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:23.754056  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 10:32:23.754064  110500 round_trippers.go:580]     Audit-Id: 83d20878-ef82-4af2-a8ed-28c1b8299d89
	I0114 10:32:23.754073  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:23.754085  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:23.754093  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:23.754103  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:23.754191  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:24.255300  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:24.255338  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.255348  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.255355  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.257483  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:24.257502  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.257509  110500 round_trippers.go:580]     Audit-Id: d39e3692-47b7-4e86-ada1-da6bc3a167a8
	I0114 10:32:24.257517  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.257526  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.257536  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.257548  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.257562  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.257672  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:24.258113  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:24.258127  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.258138  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.258147  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.259968  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.259991  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.260002  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.260012  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.260020  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.260026  110500 round_trippers.go:580]     Audit-Id: a0b0bcf0-c6b9-4f99-aedf-f72364dcfbaf
	I0114 10:32:24.260033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.260047  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.260220  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:24.754715  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:24.754736  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.754744  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.754750  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.756730  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.756773  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.756783  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.756791  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.756800  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.756811  110500 round_trippers.go:580]     Audit-Id: df0a6fa2-a310-40fd-976c-91df137ef1ec
	I0114 10:32:24.756823  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.756832  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.756952  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:24.757338  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:24.757350  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:24.757357  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:24.757363  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:24.758957  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:24.758976  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:24.758984  110500 round_trippers.go:580]     Audit-Id: d4a78893-a75c-47a5-9145-000537d9e421
	I0114 10:32:24.758993  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:24.759002  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:24.759013  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:24.759023  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:24.759039  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 10:32:24.759147  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.254722  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:25.254743  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.254751  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.254758  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.256711  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.256735  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.256747  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.256756  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.256765  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.256773  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.256785  110500 round_trippers.go:580]     Audit-Id: b0b46cf3-6548-4625-889c-7ab1f6b91f5f
	I0114 10:32:25.256797  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.256916  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:25.257361  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:25.257375  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.257382  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.257392  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.258984  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.259007  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.259013  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.259019  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.259027  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.259036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.259052  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.259060  110500 round_trippers.go:580]     Audit-Id: 055c2428-6dda-4feb-871a-f78137c59674
	I0114 10:32:25.259182  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.754723  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:25.754750  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.754758  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.754764  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.756888  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:25.756905  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.756912  110500 round_trippers.go:580]     Audit-Id: 5ead797e-e2a7-46ae-8b8e-71c88b5db5b4
	I0114 10:32:25.756917  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.756923  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.756932  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.756941  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.756975  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.757091  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:25.757536  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:25.757549  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:25.757558  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:25.757564  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:25.759116  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:25.759139  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:25.759149  110500 round_trippers.go:580]     Audit-Id: c137e88b-c781-4c64-bb12-5a1558b3c42d
	I0114 10:32:25.759158  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:25.759166  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:25.759173  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:25.759181  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:25.759186  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 10:32:25.759337  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:25.759698  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:26.255190  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:26.255211  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.255222  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.255230  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.257295  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:26.257321  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.257331  110500 round_trippers.go:580]     Audit-Id: 4118a105-2c79-4fa0-a2d9-f41e62a1936d
	I0114 10:32:26.257341  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.257349  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.257356  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.257361  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.257366  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.257473  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:26.257881  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:26.257893  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.257900  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.257906  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.259401  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:26.259416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.259422  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.259427  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.259434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.259443  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.259452  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.259463  110500 round_trippers.go:580]     Audit-Id: e165b0e4-85ed-42f7-8b3b-16d681c452ff
	I0114 10:32:26.259579  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:26.755171  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:26.755193  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.755204  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.755212  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.757390  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:26.757416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.757426  110500 round_trippers.go:580]     Audit-Id: 20047ef4-60e6-42c1-ac1d-2be32965f108
	I0114 10:32:26.757437  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.757445  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.757457  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.757470  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.757479  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.757601  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:26.757998  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:26.758011  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:26.758019  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:26.758025  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:26.759714  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:26.759735  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:26.759745  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 10:32:26.759750  110500 round_trippers.go:580]     Audit-Id: 4dee5e22-75a1-4049-a04f-14302d303af1
	I0114 10:32:26.759756  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:26.759764  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:26.759770  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:26.759779  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:26.759917  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.255456  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:27.255476  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.255485  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.255491  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.257395  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.257416  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.257422  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.257428  110500 round_trippers.go:580]     Audit-Id: 8b584991-7a59-4031-a6ee-ed36b8d982da
	I0114 10:32:27.257433  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.257438  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.257444  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.257450  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.257554  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:27.257971  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:27.257985  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.257995  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.258004  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.259540  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.259560  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.259570  110500 round_trippers.go:580]     Audit-Id: c61cc11c-4f94-4185-85ee-04dcc2eaf2c6
	I0114 10:32:27.259579  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.259588  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.259599  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.259608  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.259619  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.259770  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.755355  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:27.755375  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.755388  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.755394  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.757497  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:27.757520  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.757531  110500 round_trippers.go:580]     Audit-Id: 3ae75ec2-a8ef-4917-afc8-ef9aa3d382cd
	I0114 10:32:27.757540  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.757549  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.757558  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.757578  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.757589  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.757717  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:27.758225  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:27.758239  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:27.758250  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:27.758260  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:27.759856  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:27.759873  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:27.759881  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:27.759886  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:27.759891  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:27.759897  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 10:32:27.759904  110500 round_trippers.go:580]     Audit-Id: 2167b85d-7304-4b18-982d-1ea14fbc5a03
	I0114 10:32:27.759909  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:27.760031  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:27.760332  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:28.255596  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:28.255615  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.255623  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.255629  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.257537  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.257560  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.257570  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.257578  110500 round_trippers.go:580]     Audit-Id: 03d4795a-bb14-49fe-8c24-291b677b4317
	I0114 10:32:28.257585  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.257593  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.257602  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.257611  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.257746  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:28.258130  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:28.258143  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.258153  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.258163  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.259792  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.259811  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.259821  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.259828  110500 round_trippers.go:580]     Audit-Id: 6e41dfe8-826e-4c61-9d51-16e6b88d0c61
	I0114 10:32:28.259836  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.259845  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.259854  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.259864  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.260018  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:28.755686  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:28.755715  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.755727  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.755738  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.757862  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:28.757882  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.757892  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.757899  110500 round_trippers.go:580]     Audit-Id: 09283aa4-913b-44ca-ac73-1a2c219fa6d2
	I0114 10:32:28.757907  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.757916  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.757924  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.757940  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.758111  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:28.758529  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:28.758541  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:28.758552  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:28.758561  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:28.760208  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:28.760232  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:28.760243  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:28.760252  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:28.760262  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:28.760275  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:28.760280  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 10:32:28.760286  110500 round_trippers.go:580]     Audit-Id: 79e70e8c-2b2b-460d-8f0d-0a3d61924cb6
	I0114 10:32:28.760370  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:29.254944  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:29.254966  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.254974  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.254980  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.256978  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.256998  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.257007  110500 round_trippers.go:580]     Audit-Id: 6e8024a5-8df6-485d-a1cc-c9a6aaec52b9
	I0114 10:32:29.257018  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.257029  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.257036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.257045  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.257060  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.257165  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:29.257549  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:29.257561  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.257569  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.257575  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.259222  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.259239  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.259248  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.259256  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.259264  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.259273  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.259286  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.259299  110500 round_trippers.go:580]     Audit-Id: e5165d7a-cea8-4461-96bf-7372805a0bad
	I0114 10:32:29.259430  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:29.755550  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:29.755572  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.755582  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.755591  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.757508  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.757531  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.757541  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.757549  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.757556  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.757565  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.757577  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.757590  110500 round_trippers.go:580]     Audit-Id: 165f76d6-f2b4-485c-a16c-536de0a4d900
	I0114 10:32:29.757704  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:29.758097  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:29.758111  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:29.758121  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:29.758130  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:29.759815  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:29.759840  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:29.759851  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 10:32:29.759860  110500 round_trippers.go:580]     Audit-Id: 2334a4fc-ab40-46b8-9a67-f8c6ee0a221f
	I0114 10:32:29.759868  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:29.759881  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:29.759890  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:29.759901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:29.760019  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:30.255612  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:30.255638  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.255649  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.255657  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.257668  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.257686  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.257694  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.257700  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.257705  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.257713  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.257721  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.257739  110500 round_trippers.go:580]     Audit-Id: 8b29845a-1bac-4410-8ec6-f6d50426573e
	I0114 10:32:30.257848  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:30.258249  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:30.258262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.258269  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.258277  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.260013  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.260035  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.260046  110500 round_trippers.go:580]     Audit-Id: da3743d4-cd4c-437a-b1cc-e53d3ce1d217
	I0114 10:32:30.260055  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.260069  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.260078  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.260090  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.260101  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.260216  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:30.260612  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
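	(The trace above is one pass of minikube's readiness wait loop in pod_ready.go: roughly every 500ms it GETs the etcd pod and its node from the apiserver and logs the pod's Ready condition until it flips to True or the wait window runs out. Below is a minimal client-go sketch of the same polling pattern; the kubeconfig location, the 500ms interval, the 6-minute deadline, and the panic-style error handling are illustrative assumptions, not minikube's actual implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location (~/.kube/config); minikube actually
		// resolves this from the profile under MINIKUBE_HOME.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Assumed overall wait deadline for the poll loop.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			pod, err := client.CoreV1().Pods("kube-system").
				Get(ctx, "etcd-multinode-102822", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if podReady(pod) {
				fmt.Printf("pod %q is Ready\n", pod.Name)
				return
			}
			// Mirrors the checkpoint logged by pod_ready.go:102 above.
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n",
				pod.Name, pod.Namespace)
			select {
			case <-ctx.Done():
				panic(ctx.Err()) // wait window exhausted, as in this failed run
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	(In the failing run recorded here, the condition stays "Ready":"False" across every iteration of the window shown, so a loop of this shape keeps re-polling; minikube also re-fetches the Node object each pass, which is the second GET in every iteration below.)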
	I0114 10:32:30.754777  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:30.754797  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.754807  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.754814  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.756907  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:30.756932  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.756943  110500 round_trippers.go:580]     Audit-Id: 0e68a683-03ba-4f20-9b66-357e1ebd6f7a
	I0114 10:32:30.756952  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.756964  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.756975  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.756985  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.756997  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.757115  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:30.757633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:30.757652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:30.757663  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:30.757674  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:30.759328  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:30.759350  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:30.759360  110500 round_trippers.go:580]     Audit-Id: 79b5c42f-d5ee-473f-9ef2-7ddbd23be82b
	I0114 10:32:30.759369  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:30.759380  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:30.759393  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:30.759402  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:30.759415  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:30 GMT
	I0114 10:32:30.759552  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:31.255085  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:31.255105  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.255125  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.255133  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.257142  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.257166  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.257173  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.257179  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.257188  110500 round_trippers.go:580]     Audit-Id: 5b02cf04-91ee-4c3f-b2b9-a589aad94bae
	I0114 10:32:31.257196  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.257207  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.257224  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.257348  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:31.257765  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:31.257780  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.257791  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.257808  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.259361  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.259385  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.259394  110500 round_trippers.go:580]     Audit-Id: 7c3adabb-9744-4a02-b2bc-e2aec5b89a83
	I0114 10:32:31.259403  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.259412  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.259424  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.259435  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.259447  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.259532  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:31.755109  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:31.755130  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.755138  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.755145  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.757374  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:31.757401  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.757411  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.757418  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.757425  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.757432  110500 round_trippers.go:580]     Audit-Id: c0266278-a5dd-4e1b-af76-c45306fd69fe
	I0114 10:32:31.757440  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.757450  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.757613  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:31.757997  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:31.758009  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:31.758016  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:31.758022  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:31.759699  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:31.759725  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:31.759735  110500 round_trippers.go:580]     Audit-Id: 65120ef6-edb7-4836-9391-4ac8e7c2ed70
	I0114 10:32:31.759745  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:31.759758  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:31.759770  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:31.759783  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:31.759792  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:31 GMT
	I0114 10:32:31.759887  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.255485  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:32.255506  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.255517  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.255525  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.257491  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.257519  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.257530  110500 round_trippers.go:580]     Audit-Id: f09b3877-d0df-4034-8fb4-90ce1a1bd2de
	I0114 10:32:32.257540  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.257552  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.257561  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.257573  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.257579  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.257716  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:32.258163  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:32.258178  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.258185  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.258192  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.259808  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.259830  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.259840  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.259849  110500 round_trippers.go:580]     Audit-Id: 6127838b-d1d4-40db-901d-1116a8eeaaae
	I0114 10:32:32.259862  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.259871  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.259884  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.259893  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.260004  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.755587  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:32.755609  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.755618  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.755625  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.757750  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:32.757772  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.757782  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.757791  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.757801  110500 round_trippers.go:580]     Audit-Id: fd727de4-8b43-4307-abce-74fef66a240a
	I0114 10:32:32.757812  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.757824  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.757833  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.757927  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:32.758367  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:32.758380  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:32.758387  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:32.758394  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:32.760252  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:32.760275  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:32.760285  110500 round_trippers.go:580]     Audit-Id: 1ba406ab-1408-47f3-9f7b-d83a23d1d995
	I0114 10:32:32.760294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:32.760303  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:32.760311  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:32.760320  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:32.760333  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:32 GMT
	I0114 10:32:32.760453  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:32.760759  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:33.255183  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:33.255203  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.255216  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.255227  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.256935  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:33.256955  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.256965  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.256974  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.256983  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.256997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.257006  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.257015  110500 round_trippers.go:580]     Audit-Id: f707a20e-84c0-4220-8de1-a410d53bbbd2
	I0114 10:32:33.257114  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:33.257633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:33.257652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.257664  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.257673  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.261003  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:33.261025  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.261038  110500 round_trippers.go:580]     Audit-Id: 55c5ec23-8f33-46de-a5f1-ca14186a4547
	I0114 10:32:33.261047  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.261055  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.261067  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.261079  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.261089  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.261197  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:33.754764  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:33.754788  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.754796  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.754802  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.756951  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:33.756974  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.756990  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.756999  110500 round_trippers.go:580]     Audit-Id: 3b0d4e77-45b8-44ee-b4f3-262610cdf21f
	I0114 10:32:33.757012  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.757024  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.757036  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.757049  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.757166  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:33.757602  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:33.757615  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:33.757622  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:33.757628  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:33.759206  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:33.759227  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:33.759236  110500 round_trippers.go:580]     Audit-Id: bb4508cd-9c1a-4d46-a92a-ce006889479a
	I0114 10:32:33.759246  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:33.759255  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:33.759267  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:33.759279  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:33.759292  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:33 GMT
	I0114 10:32:33.759396  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:34.254745  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:34.254766  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.254774  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.254780  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.256865  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:34.256890  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.256901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.256910  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.256918  110500 round_trippers.go:580]     Audit-Id: 19099497-60b0-4de7-a3a8-250a4c3230ae
	I0114 10:32:34.256927  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.256937  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.256949  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.257056  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:34.257478  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:34.257492  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.257500  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.257507  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.259093  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:34.259116  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.259125  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.259135  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.259147  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.259156  110500 round_trippers.go:580]     Audit-Id: 652cb597-28f1-4a27-a56c-a6bd5d19765f
	I0114 10:32:34.259168  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.259179  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.259278  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:34.754848  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:34.754881  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.754890  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.754897  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.757014  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:34.757041  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.757052  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.757061  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.757067  110500 round_trippers.go:580]     Audit-Id: c80617eb-7d4c-4c6e-9079-9c6bcb1f5c04
	I0114 10:32:34.757074  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.757083  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.757093  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.757298  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:34.757687  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:34.757699  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:34.757707  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:34.757713  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:34.759337  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:34.759354  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:34.759360  110500 round_trippers.go:580]     Audit-Id: 7b4eee6e-6983-4da2-a62a-b725116b5647
	I0114 10:32:34.759366  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:34.759371  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:34.759379  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:34.759388  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:34.759399  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:34 GMT
	I0114 10:32:34.759513  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:35.254779  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:35.254804  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.254816  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.254825  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.256971  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:35.256996  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.257004  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.257010  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.257016  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.257025  110500 round_trippers.go:580]     Audit-Id: 0d3539bd-6778-49cc-96e9-9bad1309f553
	I0114 10:32:35.257033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.257045  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.257162  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:35.257610  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:35.257623  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.257631  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.257637  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.259224  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:35.259240  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.259247  110500 round_trippers.go:580]     Audit-Id: 0ae81ca5-2444-4e02-9a52-f90078681427
	I0114 10:32:35.259255  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.259263  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.259275  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.259288  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.259299  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.259411  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:35.259731  110500 pod_ready.go:102] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:35.754968  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:35.754991  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.754999  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.755005  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.757108  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:35.757134  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.757141  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.757147  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.757153  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.757158  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.757164  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.757169  110500 round_trippers.go:580]     Audit-Id: 346069ca-82c5-48d2-9563-9b2ddbf48dc1
	I0114 10:32:35.757280  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"682","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6259 chars]
	I0114 10:32:35.757685  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:35.757698  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:35.757706  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:35.757712  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:35.759339  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:35.759361  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:35.759371  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:35 GMT
	I0114 10:32:35.759380  110500 round_trippers.go:580]     Audit-Id: 4f47e4ad-f84c-4e00-b267-67aaa09518dc
	I0114 10:32:35.759390  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:35.759403  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:35.759408  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:35.759420  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:35.759543  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.255483  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:36.255502  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.255510  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.255516  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.257488  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.257509  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.257516  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.257521  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.257527  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.257532  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.257537  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.257542  110500 round_trippers.go:580]     Audit-Id: 686cc3bb-197f-4b71-8669-c9c811866ccb
	I0114 10:32:36.257662  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"777","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6035 chars]
	I0114 10:32:36.258125  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.258140  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.258151  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.258161  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.259632  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.259648  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.259654  110500 round_trippers.go:580]     Audit-Id: 88ae3e7f-6045-49d8-bb2d-357c67401973
	I0114 10:32:36.259660  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.259667  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.259691  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.259703  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.259722  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.259868  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.260159  110500 pod_ready.go:92] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.260181  110500 pod_ready.go:81] duration metric: took 12.509495988s waiting for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.260205  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.260261  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102822
	I0114 10:32:36.260270  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.260282  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.260297  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.261791  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.261809  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.261818  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.261826  110500 round_trippers.go:580]     Audit-Id: 5634a153-6fb5-404f-a137-f53afacc1245
	I0114 10:32:36.261834  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.261846  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.261855  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.261869  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.262026  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102822","namespace":"kube-system","uid":"c74a88c9-d603-4a80-a194-de75c8d0a3a5","resourceVersion":"770","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.mirror":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.seen":"2023-01-14T10:28:42.123458577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8421 chars]
	I0114 10:32:36.262461  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.262472  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.262479  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.262487  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.263929  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.263945  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.263954  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.263962  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.263970  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.263982  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.263995  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.264005  110500 round_trippers.go:580]     Audit-Id: e55d109e-705e-470a-9744-b5583c449686
	I0114 10:32:36.264139  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.264398  110500 pod_ready.go:92] pod "kube-apiserver-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.264408  110500 pod_ready.go:81] duration metric: took 4.192145ms waiting for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.264418  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.264453  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102822
	I0114 10:32:36.264460  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.264467  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.264474  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.266038  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.266055  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.266064  110500 round_trippers.go:580]     Audit-Id: 47324d68-80de-4990-a022-d7d52a3fcbf0
	I0114 10:32:36.266071  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.266079  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.266088  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.266097  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.266107  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.266215  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102822","namespace":"kube-system","uid":"85c8a264-96f3-4fcf-affd-917b94bdd177","resourceVersion":"774","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.mirror":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.seen":"2023-01-14T10:28:42.123460297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7996 chars]
	I0114 10:32:36.266578  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.266591  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.266601  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.266611  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.267963  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.267978  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.267984  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.267990  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.267996  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.268005  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.268017  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.268028  110500 round_trippers.go:580]     Audit-Id: 93002af6-97df-471f-95fa-3d5e668e2fca
	I0114 10:32:36.268120  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.268373  110500 pod_ready.go:92] pod "kube-controller-manager-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.268384  110500 pod_ready.go:81] duration metric: took 3.959272ms waiting for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.268394  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.268427  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d5n6
	I0114 10:32:36.268434  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.268441  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.268447  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.269931  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.269946  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.269953  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.269959  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.269964  110500 round_trippers.go:580]     Audit-Id: 51ab0589-c66b-4eab-b16d-8834f2151d9a
	I0114 10:32:36.269972  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.269981  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.269997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.270077  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4d5n6","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dba561b-e827-4a6e-afd9-11c68b7e4447","resourceVersion":"471","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5522 chars]
	I0114 10:32:36.270392  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:32:36.270402  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.270408  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.270414  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.271787  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.271805  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.271811  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.271817  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.271823  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.271828  110500 round_trippers.go:580]     Audit-Id: 126c3c4b-b18f-441c-b869-90363ea3dee2
	I0114 10:32:36.271833  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.271838  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.271935  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4","resourceVersion":"549","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io
/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1" [truncated 4430 chars]
	I0114 10:32:36.272154  110500 pod_ready.go:92] pod "kube-proxy-4d5n6" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.272165  110500 pod_ready.go:81] duration metric: took 3.765849ms waiting for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.272172  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.272210  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:36.272218  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.272224  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.272230  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.273735  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.273750  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.273760  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.273769  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.273784  110500 round_trippers.go:580]     Audit-Id: e57ac6dc-00b8-4e56-8601-76f0d7bbb22c
	I0114 10:32:36.273797  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.273809  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.273819  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.273928  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bzd24","generateName":"kube-proxy-","namespace":"kube-system","uid":"3191f786-4823-486a-90e6-be1b1180c23a","resourceVersion":"660","creationTimestamp":"2023-01-14T10:30:20Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 10:32:36.274311  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:36.274329  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.274336  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.274342  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.275632  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:36.275645  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.275652  110500 round_trippers.go:580]     Audit-Id: 8bcc2f0b-ac50-4161-9c26-9bb0097ebfb8
	I0114 10:32:36.275657  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.275663  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.275668  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.275697  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.275708  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.275877  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m03","uid":"7fb9d125-0cce-4853-b9a9-9348e20e7ae7","resourceVersion":"674","creationTimestamp":"2023-01-14T10:31:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu
mes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{"." [truncated 4248 chars]
	I0114 10:32:36.276181  110500 pod_ready.go:92] pod "kube-proxy-bzd24" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.276195  110500 pod_ready.go:81] duration metric: took 4.017618ms waiting for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.276205  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.455534  110500 request.go:614] Waited for 179.275116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:36.455598  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:36.455602  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.455629  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.455639  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.457708  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.457736  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.457747  110500 round_trippers.go:580]     Audit-Id: 54c225ad-ae45-474b-b73c-0a4296e75b17
	I0114 10:32:36.457756  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.457763  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.457775  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.457794  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.457803  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.457939  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlcll","generateName":"kube-proxy-","namespace":"kube-system","uid":"91e05737-5cbf-404c-8b7c-75045f584885","resourceVersion":"718","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5727 chars]
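
The repeated "Waited for … due to client-side throttling, not priority and fairness" lines here come from client-go's own rate limiter, not the API server: when rest.Config leaves QPS and Burst at 0 (as the config dump later in this log shows), client-go defaults to 5 requests/s with a burst of 10 and logs any request it had to delay. A minimal sketch of raising those limits, with illustrative values only:

```go
// Sketch: lift client-go's client-side rate limit, which produces the
// "Waited for ... due to client-side throttling" lines whenever
// rest.Config leaves QPS/Burst at 0 (defaults: QPS 5, Burst 10).
package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // illustrative values, not minikube's settings
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
```
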
	I0114 10:32:36.655705  110500 request.go:614] Waited for 197.282889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.655766  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:36.655771  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.655779  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.655786  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.658009  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.658030  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.658044  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.658052  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.658061  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.658068  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.658077  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.658088  110500 round_trippers.go:580]     Audit-Id: 9962cd09-553e-4e94-9f81-8a21b65473fa
	I0114 10:32:36.658176  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:36.658482  110500 pod_ready.go:92] pod "kube-proxy-qlcll" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:36.658493  110500 pod_ready.go:81] duration metric: took 382.279914ms waiting for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.658501  110500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:36.855574  110500 request.go:614] Waited for 196.992212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:36.855633  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:36.855641  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:36.855652  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:36.855660  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:36.857870  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:36.857889  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:36.857896  110500 round_trippers.go:580]     Audit-Id: 9d2c7405-2f70-45e4-b3b5-c264d1b3fc4f
	I0114 10:32:36.857902  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:36.857907  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:36.857913  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:36.857918  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:36.857924  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:36 GMT
	I0114 10:32:36.858061  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102822","namespace":"kube-system","uid":"63ee442e-88de-44d9-8512-98c56f1b4942","resourceVersion":"725","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.mirror":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.seen":"2023-01-14T10:28:42.123461701Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4878 chars]
	I0114 10:32:37.055720  110500 request.go:614] Waited for 197.266354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.055769  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.055774  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.055787  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.055797  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.057958  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.057979  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.057988  110500 round_trippers.go:580]     Audit-Id: 1580fd75-e4e0-4a4a-9791-aff04e65f15c
	I0114 10:32:37.057993  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.058002  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.058008  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.058015  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.058021  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.058137  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:37.058454  110500 pod_ready.go:92] pod "kube-scheduler-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:37.058468  110500 pod_ready.go:81] duration metric: took 399.960714ms waiting for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:37.058477  110500 pod_ready.go:38] duration metric: took 13.317646399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
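
The pod_ready.go loop above is the standard client-go readiness poll: GET the pod roughly every 500ms, check its Ready condition, and give up after the 4m0s budget. A hedged sketch of that pattern (the kubeconfig path and error handling are illustrative, not minikube's actual code):

```go
// A minimal sketch of the pod_ready.go polling pattern, not minikube's
// actual code. Assumes a reachable cluster; the kubeconfig path is
// hypothetical.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True: one GET
// roughly every 500ms, with the same 4m0s budget the log shows.
func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(kubernetes.NewForConfigOrDie(cfg), "kube-system", "etcd-multinode-102822"))
}
```
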
	I0114 10:32:37.058494  110500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:32:37.065465  110500 command_runner.go:130] > -16
	I0114 10:32:37.065504  110500 ops.go:34] apiserver oom_adj: -16
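
The oom_adj check is the same shell one-liner the log shows, parsed into an int. A small sketch of the equivalent in Go (assumes pgrep matches exactly one kube-apiserver process, as on a single control-plane node):

```go
// Sketch of the oom_adj check; assumes pgrep matches exactly one
// kube-apiserver process.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func apiserverOOMAdj() (int, error) {
	out, err := exec.Command("/bin/bash", "-c",
		`cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out))) // "-16" in this run
}

func main() { fmt.Println(apiserverOOMAdj()) }
```
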
	I0114 10:32:37.065514  110500 kubeadm.go:631] restartCluster took 23.987780678s
	I0114 10:32:37.065526  110500 kubeadm.go:398] StartCluster complete in 24.028830611s
	I0114 10:32:37.065550  110500 settings.go:142] acquiring lock: {Name:mk1c1a895c03873155a8c7da5f3762b351f9952d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:37.065670  110500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.066259  110500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:32:37.066720  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.066964  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
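
The rest.Config dump above is what minikube derives from the test profile's kubeconfig: certificate auth from the profile directory, default QPS/Burst, and a wrapped transport. A sketch of building an equivalent config from the same file (the context name is an assumption):

```go
// Sketch: derive a rest.Config from the kubeconfig the log loads.
// The context name "multinode-102822" is an assumption.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	raw, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/15642-3818/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg, err := clientcmd.NewNonInteractiveClientConfig(
		*raw, "multinode-102822", &clientcmd.ConfigOverrides{}, nil,
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Host) // e.g. https://192.168.58.2:8443
}
```
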
	I0114 10:32:37.067294  110500 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0114 10:32:37.067309  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.067324  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.067333  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.069540  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.069555  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.069562  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.069567  110500 round_trippers.go:580]     Content-Length: 291
	I0114 10:32:37.069573  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.069578  110500 round_trippers.go:580]     Audit-Id: 72d12c09-7a9f-482a-ba0b-2b59f789418c
	I0114 10:32:37.069583  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.069588  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.069594  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.069612  110500 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a0ae11c7-3256-4ef8-a0cd-ff11f2de358a","resourceVersion":"753","creationTimestamp":"2023-01-14T10:28:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0114 10:32:37.069762  110500 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-102822" rescaled to 1
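
The rescale step works through the Deployment's autoscaling/v1 Scale subresource, exactly the GET …/deployments/coredns/scale above followed by a write if replicas differ. A minimal sketch, assuming an existing clientset:

```go
// Sketch only: pin the coredns Deployment to one replica through the
// Scale subresource, as the kapi.go step above does.
package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func pinCoreDNSReplicas(ctx context.Context, cs kubernetes.Interface) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == 1 {
		return nil // already at the desired size
	}
	scale.Spec.Replicas = 1
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```
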
	I0114 10:32:37.069822  110500 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:32:37.072149  110500 out.go:177] * Verifying Kubernetes components...
	I0114 10:32:37.069850  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 10:32:37.069869  110500 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0114 10:32:37.070096  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:37.073566  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:32:37.073588  110500 addons.go:65] Setting storage-provisioner=true in profile "multinode-102822"
	I0114 10:32:37.073605  110500 addons.go:65] Setting default-storageclass=true in profile "multinode-102822"
	I0114 10:32:37.073612  110500 addons.go:227] Setting addon storage-provisioner=true in "multinode-102822"
	W0114 10:32:37.073620  110500 addons.go:236] addon storage-provisioner should already be in state true
	I0114 10:32:37.073683  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:32:37.073623  110500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-102822"
	I0114 10:32:37.073995  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.074114  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.083738  110500 node_ready.go:35] waiting up to 6m0s for node "multinode-102822" to be "Ready" ...
	I0114 10:32:37.101957  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:32:37.102248  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:32:37.104720  110500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:32:37.102705  110500 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0114 10:32:37.106493  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.106510  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.106521  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.106628  110500 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:32:37.106646  110500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 10:32:37.106697  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:37.108695  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.108732  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.108744  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.108754  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.108763  110500 round_trippers.go:580]     Content-Length: 1273
	I0114 10:32:37.108775  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.108784  110500 round_trippers.go:580]     Audit-Id: 0ea79256-b1ab-4ac5-8466-82be87c881b8
	I0114 10:32:37.108794  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.108800  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.108831  110500 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0114 10:32:37.109300  110500 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 10:32:37.109347  110500 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0114 10:32:37.109351  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.109359  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.109368  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.109374  110500 round_trippers.go:473]     Content-Type: application/json
	I0114 10:32:37.112948  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:37.112965  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.112974  110500 round_trippers.go:580]     Audit-Id: bc58afbd-603f-4591-8a19-d9db28fda25c
	I0114 10:32:37.112983  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.112992  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.113004  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.113014  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.113024  110500 round_trippers.go:580]     Content-Length: 1220
	I0114 10:32:37.113030  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.113076  110500 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1d09530-1288-4dab-ac57-5d9b43804c97","resourceVersion":"378","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
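
The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses is how the default-storageclass addon keeps `standard` marked default: read the class, ensure the storageclass.kubernetes.io/is-default-class annotation, write it back. A sketch of the same idea (assumed clientset, simplified error handling):

```go
// Sketch of the default-storageclass reconciliation seen above; cs is an
// assumed clientset and error handling is simplified.
package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const defaultClassKey = "storageclass.kubernetes.io/is-default-class"

func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations[defaultClassKey] = "true" // the PUT body above carries this
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}
```
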
	I0114 10:32:37.113223  110500 addons.go:227] Setting addon default-storageclass=true in "multinode-102822"
	W0114 10:32:37.113242  110500 addons.go:236] addon default-storageclass should already be in state true
	I0114 10:32:37.113268  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:32:37.113633  110500 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:32:37.134182  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:37.140584  110500 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 10:32:37.140618  110500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 10:32:37.140684  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:32:37.142257  110500 command_runner.go:130] > apiVersion: v1
	I0114 10:32:37.142279  110500 command_runner.go:130] > data:
	I0114 10:32:37.142286  110500 command_runner.go:130] >   Corefile: |
	I0114 10:32:37.142291  110500 command_runner.go:130] >     .:53 {
	I0114 10:32:37.142298  110500 command_runner.go:130] >         errors
	I0114 10:32:37.142306  110500 command_runner.go:130] >         health {
	I0114 10:32:37.142312  110500 command_runner.go:130] >            lameduck 5s
	I0114 10:32:37.142318  110500 command_runner.go:130] >         }
	I0114 10:32:37.142329  110500 command_runner.go:130] >         ready
	I0114 10:32:37.142340  110500 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0114 10:32:37.142350  110500 command_runner.go:130] >            pods insecure
	I0114 10:32:37.142361  110500 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0114 10:32:37.142372  110500 command_runner.go:130] >            ttl 30
	I0114 10:32:37.142378  110500 command_runner.go:130] >         }
	I0114 10:32:37.142383  110500 command_runner.go:130] >         prometheus :9153
	I0114 10:32:37.142396  110500 command_runner.go:130] >         hosts {
	I0114 10:32:37.142408  110500 command_runner.go:130] >            192.168.58.1 host.minikube.internal
	I0114 10:32:37.142415  110500 command_runner.go:130] >            fallthrough
	I0114 10:32:37.142424  110500 command_runner.go:130] >         }
	I0114 10:32:37.142432  110500 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0114 10:32:37.142443  110500 command_runner.go:130] >            max_concurrent 1000
	I0114 10:32:37.142452  110500 command_runner.go:130] >         }
	I0114 10:32:37.142458  110500 command_runner.go:130] >         cache 30
	I0114 10:32:37.142467  110500 command_runner.go:130] >         loop
	I0114 10:32:37.142476  110500 command_runner.go:130] >         reload
	I0114 10:32:37.142486  110500 command_runner.go:130] >         loadbalance
	I0114 10:32:37.142492  110500 command_runner.go:130] >     }
	I0114 10:32:37.142501  110500 command_runner.go:130] > kind: ConfigMap
	I0114 10:32:37.142507  110500 command_runner.go:130] > metadata:
	I0114 10:32:37.142520  110500 command_runner.go:130] >   creationTimestamp: "2023-01-14T10:28:42Z"
	I0114 10:32:37.142530  110500 command_runner.go:130] >   name: coredns
	I0114 10:32:37.142540  110500 command_runner.go:130] >   namespace: kube-system
	I0114 10:32:37.142549  110500 command_runner.go:130] >   resourceVersion: "369"
	I0114 10:32:37.142554  110500 command_runner.go:130] >   uid: 348659ae-af6c-4ae1-ba1c-2468636d5cd9
	I0114 10:32:37.142667  110500 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
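
start.go:813 can skip rewriting CoreDNS because the Corefile dumped above already carries the hosts block for host.minikube.internal. A sketch of that check against the live ConfigMap (the substring test is a simplification of minikube's real logic):

```go
// Sketch of the "already contains host record" check; a plain substring
// test, simpler than minikube's actual Corefile handling.
package dns

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func hasMinikubeHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}
```
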
	I0114 10:32:37.166923  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:32:37.232895  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:32:37.255727  110500 request.go:614] Waited for 171.896659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.255799  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.255808  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.255816  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.255826  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.257966  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.257995  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.258005  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.258014  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.258024  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.258037  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.258048  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.258056  110500 round_trippers.go:580]     Audit-Id: 26a9ffda-d3c6-41ff-9b64-02b9f68339e0
	I0114 10:32:37.258212  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:37.258658  110500 node_ready.go:49] node "multinode-102822" has status "Ready":"True"
	I0114 10:32:37.258684  110500 node_ready.go:38] duration metric: took 174.906774ms waiting for node "multinode-102822" to be "Ready" ...
	I0114 10:32:37.258695  110500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:32:37.261672  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 10:32:37.455739  110500 request.go:614] Waited for 196.934939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:37.455812  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:37.455819  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.455845  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.455855  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.459944  110500 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 10:32:37.459977  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.459987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.459996  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.460006  110500 round_trippers.go:580]     Audit-Id: 9ef79377-2b98-4dcf-b71e-74e70cc74bad
	I0114 10:32:37.460014  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.460024  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.460035  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.461304  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84721 chars]
	I0114 10:32:37.465016  110500 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:37.474891  110500 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0114 10:32:37.476850  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0114 10:32:37.478813  110500 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 10:32:37.480672  110500 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 10:32:37.521681  110500 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0114 10:32:37.531444  110500 command_runner.go:130] > pod/storage-provisioner configured
	I0114 10:32:37.535193  110500 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0114 10:32:37.537991  110500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 10:32:37.539317  110500 addons.go:488] enableAddons completed in 469.451083ms
	I0114 10:32:37.656457  110500 request.go:614] Waited for 191.361178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:37.656516  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:37.656521  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.656528  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.656535  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.658982  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.659003  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.659010  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.659016  110500 round_trippers.go:580]     Audit-Id: 594f46bb-9c2a-47db-b0bd-2919bd22e370
	I0114 10:32:37.659022  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.659028  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.659035  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.659043  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.659176  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:37.855996  110500 request.go:614] Waited for 196.354517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.856061  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:37.856072  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:37.856083  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:37.856096  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:37.858291  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:37.858314  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:37.858321  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:37.858327  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:37.858332  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:37.858337  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 10:32:37.858343  110500 round_trippers.go:580]     Audit-Id: 5b4e38d1-702e-4a2c-b31b-d2ebda836842
	I0114 10:32:37.858350  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:37.858523  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:38.359596  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:38.359618  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.359626  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.359633  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.361829  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:38.361851  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.361862  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.361871  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.361880  110500 round_trippers.go:580]     Audit-Id: 4553d3a6-d5cb-414b-8f19-6e8030cb3318
	I0114 10:32:38.361891  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.361901  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.361912  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.362088  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:38.362557  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:38.362569  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.362576  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.362582  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.364247  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:38.364267  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.364277  110500 round_trippers.go:580]     Audit-Id: 8e182689-b157-467d-aa0f-9d9b888d9608
	I0114 10:32:38.364294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.364306  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.364321  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.364331  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.364343  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.364481  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:38.859969  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:38.859993  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.860001  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.860007  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.862084  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:38.862106  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.862116  110500 round_trippers.go:580]     Audit-Id: b5d68d28-294d-4edc-aed1-a7efefc5a6a7
	I0114 10:32:38.862124  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.862131  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.862138  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.862147  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.862156  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.862306  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:38.862890  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:38.862906  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:38.862915  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:38.862922  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:38.864596  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:38.864618  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:38.864628  110500 round_trippers.go:580]     Audit-Id: 97ff09cc-b23a-4680-b752-7f9598de1f65
	I0114 10:32:38.864635  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:38.864640  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:38.864648  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:38.864654  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:38.864660  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:38 GMT
	I0114 10:32:38.864836  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.359064  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:39.359088  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.359097  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.359104  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.361396  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:39.361421  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.361433  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.361442  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.361452  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.361468  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.361477  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.361488  110500 round_trippers.go:580]     Audit-Id: 81698645-fa16-4748-8e8b-746b7500c0b0
	I0114 10:32:39.361597  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:39.362018  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:39.362029  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.362036  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.362042  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.363706  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:39.363724  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.363731  110500 round_trippers.go:580]     Audit-Id: 06179dbb-c491-4fdf-9b46-8b57c30a2a02
	I0114 10:32:39.363736  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.363743  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.363751  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.363762  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.363775  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.363902  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.859598  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:39.859624  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.859633  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.859639  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.861841  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:39.861862  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.861869  110500 round_trippers.go:580]     Audit-Id: 5283f801-29b4-4678-bdb3-c59dd8c322ae
	I0114 10:32:39.861875  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.861891  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.861901  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.861915  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.861924  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.862110  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:39.862591  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:39.862647  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:39.862666  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:39.862677  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:39.864481  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:39.864498  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:39.864508  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:39.864516  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:39.864524  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:39.864533  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:39.864547  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:39 GMT
	I0114 10:32:39.864556  110500 round_trippers.go:580]     Audit-Id: 55799b22-9268-4877-b94b-a33177d8cdeb
	I0114 10:32:39.864734  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:39.865116  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:40.359224  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:40.359246  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.359254  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.359261  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.361570  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:40.361599  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.361606  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.361613  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.361619  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.361625  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.361633  110500 round_trippers.go:580]     Audit-Id: fab59685-da79-42f1-9658-97a80bf226a9
	I0114 10:32:40.361638  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.361768  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:40.362231  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:40.362245  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.362253  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.362259  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.364061  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.364077  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.364084  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.364091  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.364100  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.364111  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.364120  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.364126  110500 round_trippers.go:580]     Audit-Id: 43f7e97f-897f-4c5b-b9e7-c2e06b9b42f4
	I0114 10:32:40.364245  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:40.859884  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:40.859905  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.859913  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.859919  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.861912  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.861938  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.861949  110500 round_trippers.go:580]     Audit-Id: 2c9c969f-0642-4f26-bf0a-d2e8bc6a68ed
	I0114 10:32:40.861959  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.861969  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.861978  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.861990  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.862001  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.862133  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:40.862577  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:40.862590  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:40.862597  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:40.862605  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:40.864355  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:40.864375  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:40.864385  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:40.864393  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:40 GMT
	I0114 10:32:40.864406  110500 round_trippers.go:580]     Audit-Id: 1820abbb-3ffb-4314-afa4-4789a3f8b5fb
	I0114 10:32:40.864412  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:40.864419  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:40.864425  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:40.864542  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:41.359096  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:41.359121  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.359132  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.359141  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.361299  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.361321  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.361328  110500 round_trippers.go:580]     Audit-Id: 034170e7-b7e3-4243-8ab4-133db6e98d26
	I0114 10:32:41.361334  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.361340  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.361346  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.361367  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.361375  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.361529  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:41.361977  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:41.361989  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.361996  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.362006  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.364080  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.364105  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.364114  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.364123  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.364130  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.364137  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.364145  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.364158  110500 round_trippers.go:580]     Audit-Id: 6c30d9ea-d002-4067-b3f3-a45d3334319b
	I0114 10:32:41.364280  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:41.859939  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:41.859959  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.859967  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.859974  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.862111  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:41.862128  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.862135  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.862140  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.862145  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.862151  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.862156  110500 round_trippers.go:580]     Audit-Id: 5fab01c3-76e3-4314-902a-ce2b17e158b7
	I0114 10:32:41.862161  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.862272  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:41.862698  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:41.862709  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:41.862716  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:41.862722  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:41.864461  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:41.864481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:41.864492  110500 round_trippers.go:580]     Audit-Id: 70012846-9522-4e4a-b077-d768ada29a5c
	I0114 10:32:41.864501  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:41.864509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:41.864521  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:41.864529  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:41.864538  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:41 GMT
	I0114 10:32:41.864653  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:42.359169  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:42.359192  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.359200  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.359207  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.361441  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:42.361465  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.361475  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.361483  110500 round_trippers.go:580]     Audit-Id: 5c90d803-398e-4d0e-b154-3406a73293ce
	I0114 10:32:42.361492  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.361500  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.361509  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.361521  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.361656  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:42.362144  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:42.362159  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.362166  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.362172  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.363986  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:42.364008  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.364019  110500 round_trippers.go:580]     Audit-Id: 28a92848-4048-4824-9116-41fca3477677
	I0114 10:32:42.364031  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.364042  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.364055  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.364067  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.364078  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.364250  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:42.364584  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:42.859846  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:42.859875  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.859883  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.859890  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.862342  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:42.862362  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.862370  110500 round_trippers.go:580]     Audit-Id: 78bdb26b-a666-4f1d-893c-57b64da2bd73
	I0114 10:32:42.862375  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.862381  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.862386  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.862392  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.862397  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.862507  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:42.862938  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:42.862949  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:42.862956  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:42.862963  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:42.864801  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:42.864822  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:42.864831  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:42.864840  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:42.864849  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:42 GMT
	I0114 10:32:42.864861  110500 round_trippers.go:580]     Audit-Id: 158eefbb-c63f-4fc0-ae90-e23ca6843f48
	I0114 10:32:42.864877  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:42.864887  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:42.864999  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:43.359803  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:43.359828  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.359837  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.359843  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.361988  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:43.362007  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.362014  110500 round_trippers.go:580]     Audit-Id: f05e6b2e-be63-42aa-bf95-7b57c23f420d
	I0114 10:32:43.362020  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.362025  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.362034  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.362039  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.362047  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.362144  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:43.362591  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:43.362603  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.362611  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.362618  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.364316  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:43.364337  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.364347  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.364356  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.364369  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.364378  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.364389  110500 round_trippers.go:580]     Audit-Id: 47c3825f-c4f2-4038-b9c9-ee937c9f14c3
	I0114 10:32:43.364401  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.364527  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:43.859053  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:43.859076  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.859084  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.859090  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.861229  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:43.861248  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.861255  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.861261  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.861272  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.861284  110500 round_trippers.go:580]     Audit-Id: fbebfdc6-d1bc-48b9-b0ed-49434b5c9ab0
	I0114 10:32:43.861294  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.861301  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.861450  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:43.861923  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:43.861935  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:43.861943  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:43.861949  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:43.863768  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:43.863788  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:43.863796  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:43.863802  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:43.863807  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:43 GMT
	I0114 10:32:43.863812  110500 round_trippers.go:580]     Audit-Id: 03d3495e-a450-42d8-ab63-885b6e3ff6e9
	I0114 10:32:43.863821  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:43.863829  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:43.863951  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:44.359472  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:44.359499  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.359512  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.359518  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.362100  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:44.362134  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.362144  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.362151  110500 round_trippers.go:580]     Audit-Id: 2ecb38ea-7962-4ce4-9634-c8d41bc36023
	I0114 10:32:44.362156  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.362162  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.362171  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.362184  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.362385  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:44.363138  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:44.363160  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.363172  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.363223  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.365040  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:44.365059  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.365066  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.365071  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.365078  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.365090  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.365102  110500 round_trippers.go:580]     Audit-Id: 504ff71e-4c40-4919-915c-04e52d16b2f0
	I0114 10:32:44.365114  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.365237  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:44.365660  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:44.859846  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:44.859869  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.859880  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.859886  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.862148  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:44.862174  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.862184  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.862193  110500 round_trippers.go:580]     Audit-Id: 1aef2bd1-d91e-4c55-b201-3d8fcb31bef9
	I0114 10:32:44.862202  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.862211  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.862218  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.862227  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.862358  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:44.862889  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:44.862901  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:44.862911  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:44.862917  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:44.864803  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:44.864819  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:44.864826  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:44.864831  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:44.864837  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:44.864844  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:44.864853  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:44 GMT
	I0114 10:32:44.864861  110500 round_trippers.go:580]     Audit-Id: 916976a1-8d50-4315-b550-483b9bc9608b
	I0114 10:32:44.865027  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:45.359632  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:45.359652  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.359660  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.359677  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.361927  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.361956  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.361967  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.361976  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.361985  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.362039  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.362051  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.362058  110500 round_trippers.go:580]     Audit-Id: 46f9e573-a43c-4914-bc5c-824c57798d3a
	I0114 10:32:45.362172  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:45.362607  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:45.362618  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.362626  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.362633  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.364421  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:45.364440  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.364449  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.364457  110500 round_trippers.go:580]     Audit-Id: da79f983-cde3-4e4d-8aa4-48f23dd813de
	I0114 10:32:45.364464  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.364472  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.364481  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.364494  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.364597  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:45.859227  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:45.859266  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.859280  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.859290  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.861600  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.861624  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.861634  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.861643  110500 round_trippers.go:580]     Audit-Id: b1855bfd-9d92-4b0c-9fc4-20a77939c6d0
	I0114 10:32:45.861651  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.861659  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.861670  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.861682  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.861828  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:45.862329  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:45.862343  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:45.862350  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:45.862357  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:45.864381  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:45.864399  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:45.864406  110500 round_trippers.go:580]     Audit-Id: be0b664f-3541-48f9-af1d-471c790dcf54
	I0114 10:32:45.864412  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:45.864418  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:45.864426  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:45.864434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:45.864443  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:45 GMT
	I0114 10:32:45.864564  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.359579  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:46.359602  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.359610  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.359617  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.361864  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:46.361890  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.361901  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.361911  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.361920  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.361932  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.361944  110500 round_trippers.go:580]     Audit-Id: 4b299044-0411-4eef-8466-4e0b7f3f27ab
	I0114 10:32:46.361955  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.362100  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:46.362546  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:46.362557  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.362564  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.362573  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.364372  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:46.364392  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.364401  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.364407  110500 round_trippers.go:580]     Audit-Id: d9cac187-6e34-4ad6-8287-02ab069b2549
	I0114 10:32:46.364417  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.364431  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.364441  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.364454  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.364577  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.859167  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:46.859196  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.859204  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.859211  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.861386  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:46.861412  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.861422  110500 round_trippers.go:580]     Audit-Id: e0167e38-5bf6-4576-abdd-910b23e13cc8
	I0114 10:32:46.861431  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.861438  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.861447  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.861462  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.861470  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.861571  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:46.862026  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:46.862037  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:46.862044  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:46.862050  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:46.863759  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:46.863775  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:46.863781  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:46.863787  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:46.863793  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:46.863801  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:46.863809  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:46 GMT
	I0114 10:32:46.863817  110500 round_trippers.go:580]     Audit-Id: ca4fe996-62ad-474d-848c-ccade570ba3d
	I0114 10:32:46.863978  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:46.864330  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:47.359737  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:47.359759  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.359768  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.359774  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.362284  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:47.362308  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.362318  110500 round_trippers.go:580]     Audit-Id: 35b47af2-d322-448d-9b6f-19d6f47b8f05
	I0114 10:32:47.362327  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.362335  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.362344  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.362353  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.362366  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.362488  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:47.363062  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:47.363082  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.363092  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.363098  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.365099  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:47.365123  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.365133  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.365144  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.365153  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.365162  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.365170  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.365178  110500 round_trippers.go:580]     Audit-Id: 48f2bdf3-9d93-44a9-a63a-3148dd9812b7
	I0114 10:32:47.365278  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:47.859942  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:47.859965  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.859976  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.859984  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.862271  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:47.862297  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.862308  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.862317  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.862324  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.862329  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.862335  110500 round_trippers.go:580]     Audit-Id: 0634038b-38ea-4702-bfee-cb95338954a7
	I0114 10:32:47.862340  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.862439  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:47.862930  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:47.862946  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:47.862953  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:47.862960  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:47.864674  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:47.864737  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:47.864809  110500 round_trippers.go:580]     Audit-Id: e1d75173-0a5a-4010-a0d9-5c3a5b9e8a49
	I0114 10:32:47.864828  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:47.864834  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:47.864840  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:47.864848  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:47.864853  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:47 GMT
	I0114 10:32:47.864975  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.359325  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:48.359347  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.359359  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.359366  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.361551  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:48.361576  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.361586  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.361595  110500 round_trippers.go:580]     Audit-Id: 8ceb8c5f-cbf9-479b-a2c0-9ce1d42b4db1
	I0114 10:32:48.361604  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.361614  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.361627  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.361641  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.361766  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:48.362247  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:48.362262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.362270  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.362276  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.364040  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:48.364062  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.364079  110500 round_trippers.go:580]     Audit-Id: 44d53443-bee2-482c-b5f3-8914e7fce187
	I0114 10:32:48.364091  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.364105  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.364113  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.364123  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.364132  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.364251  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.859955  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:48.859984  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.859998  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.860008  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.862410  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:48.862438  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.862453  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.862472  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.862479  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.862487  110500 round_trippers.go:580]     Audit-Id: 959b5886-f2da-4e70-bd52-6ce100746f2e
	I0114 10:32:48.862497  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.862509  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.862637  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:48.863215  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:48.863232  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:48.863243  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:48.863253  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:48.864931  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:48.864949  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:48.864959  110500 round_trippers.go:580]     Audit-Id: 98987253-9d9e-433d-856c-fe638637ea02
	I0114 10:32:48.864969  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:48.864978  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:48.864987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:48.864997  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:48.865009  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:48 GMT
	I0114 10:32:48.865115  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:48.865421  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:49.359720  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:49.359745  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.359755  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.359766  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.361934  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:49.361955  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.361962  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.361968  110500 round_trippers.go:580]     Audit-Id: 8c97b06b-cb60-4069-83b5-4dee919ddacb
	I0114 10:32:49.361973  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.361979  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.361984  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.361994  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.362122  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:49.362693  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:49.362710  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.362721  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.362731  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.364448  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:49.364474  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.364484  110500 round_trippers.go:580]     Audit-Id: c3b4d31f-3c42-413f-a50e-7006e7195737
	I0114 10:32:49.364491  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.364497  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.364506  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.364511  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.364519  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.364634  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:49.859107  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:49.859143  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.859156  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.859166  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.861351  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:49.861376  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.861387  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.861396  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.861405  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.861413  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.861423  110500 round_trippers.go:580]     Audit-Id: acf80202-d32e-427f-9905-65e7900c3476
	I0114 10:32:49.861430  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.861524  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:49.862006  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:49.862021  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:49.862033  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:49.862045  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:49.863792  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:49.863808  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:49.863814  110500 round_trippers.go:580]     Audit-Id: 43d97ecd-c9a3-4755-ba76-b682bd120b9a
	I0114 10:32:49.863819  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:49.863824  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:49.863830  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:49.863835  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:49.863843  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:49 GMT
	I0114 10:32:49.863918  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:50.359575  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:50.359599  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.359607  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.359614  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.361914  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:50.361936  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.361944  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.361949  110500 round_trippers.go:580]     Audit-Id: 4e315205-0240-45dd-a4d4-efa0c059b803
	I0114 10:32:50.361955  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.361960  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.361968  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.361974  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.362080  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:50.362668  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:50.362684  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.362695  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.362711  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.364414  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:50.364438  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.364449  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.364458  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.364476  110500 round_trippers.go:580]     Audit-Id: ba0c18f4-30aa-46b3-b9be-ce1e31c041f9
	I0114 10:32:50.364482  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.364488  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.364493  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.364600  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:50.859237  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:50.859262  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.859271  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.859277  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.861712  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:50.861744  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.861756  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.861766  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.861776  110500 round_trippers.go:580]     Audit-Id: 7dc14a15-e343-4565-bb6d-eaa7202a8b3f
	I0114 10:32:50.861781  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.861787  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.861792  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.861963  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:50.862594  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:50.862612  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:50.862624  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:50.862634  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:50.864616  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:50.864643  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:50.864650  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:50.864656  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:50.864661  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:50.864666  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:50.864671  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:50 GMT
	I0114 10:32:50.864676  110500 round_trippers.go:580]     Audit-Id: 777e0a1b-ac51-4e60-9cf8-245b5a0d6267
	I0114 10:32:50.864792  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:51.359157  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:51.359192  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.359201  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.359208  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.361393  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:51.361418  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.361428  110500 round_trippers.go:580]     Audit-Id: 727ea24c-1766-4936-b762-2c67365137af
	I0114 10:32:51.361436  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.361444  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.361457  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.361466  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.361476  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.361596  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:51.362079  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:51.362092  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.362102  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.362111  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.363956  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:51.363980  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.363990  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.363999  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.364012  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.364021  110500 round_trippers.go:580]     Audit-Id: 92c74176-28e0-41c0-ad03-3dd4ad01620d
	I0114 10:32:51.364033  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.364041  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.364146  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:51.364451  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:51.859788  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:51.859810  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.859819  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.859826  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.862051  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:51.862083  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.862098  110500 round_trippers.go:580]     Audit-Id: 5f8c387a-66e7-4893-b486-51686e6f4c1b
	I0114 10:32:51.862108  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.862119  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.862134  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.862143  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.862156  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.862263  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:51.862710  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:51.862724  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:51.862745  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:51.862756  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:51.864464  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:51.864481  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:51.864487  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:51.864493  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:51.864498  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:51.864503  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:51 GMT
	I0114 10:32:51.864508  110500 round_trippers.go:580]     Audit-Id: fbbfa460-93b1-4017-b7db-12d7f5dd096a
	I0114 10:32:51.864515  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:51.864633  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:52.359184  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:52.359216  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.359224  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.359231  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.361569  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:52.361590  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.361596  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.361602  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.361608  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.361617  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.361624  110500 round_trippers.go:580]     Audit-Id: 46cd87ca-c92a-40e7-9abc-58fb1d1da845
	I0114 10:32:52.361633  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.361785  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:52.362272  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:52.362293  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.362300  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.362306  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.364201  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:52.364225  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.364240  110500 round_trippers.go:580]     Audit-Id: 3bfff669-7a3d-4f5a-ba7a-cb801f029ad5
	I0114 10:32:52.364246  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.364252  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.364257  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.364263  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.364268  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.364382  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:52.859017  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:52.859043  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.859053  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.859061  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.861311  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:52.861335  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.861346  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.861354  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.861362  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.861370  110500 round_trippers.go:580]     Audit-Id: 6fa55034-e1ef-4b07-869f-e699f2e6ad9b
	I0114 10:32:52.861379  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.861397  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.861538  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:52.862025  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:52.862039  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:52.862047  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:52.862054  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:52.863574  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:52.863592  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:52.863601  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:52 GMT
	I0114 10:32:52.863611  110500 round_trippers.go:580]     Audit-Id: 551ef0e8-c97f-4844-9422-c5752cb489bd
	I0114 10:32:52.863619  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:52.863627  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:52.863636  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:52.863646  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:52.863779  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.359608  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:53.359632  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.359645  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.359652  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.361956  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:53.361979  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.361987  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.361993  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.361998  110500 round_trippers.go:580]     Audit-Id: 060b65f4-4c9b-400d-923e-42224d7765d1
	I0114 10:32:53.362003  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.362008  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.362013  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.362104  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"706","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6771 chars]
	I0114 10:32:53.362577  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.362593  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.362603  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.362610  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.364347  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.364377  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.364386  110500 round_trippers.go:580]     Audit-Id: 97269ed2-d9cf-4bae-ae56-b417a88fc922
	I0114 10:32:53.364392  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.364398  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.364419  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.364430  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.364435  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.364547  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.364848  110500 pod_ready.go:102] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"False"
	I0114 10:32:53.859079  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f5dzh
	I0114 10:32:53.859100  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.859108  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.859114  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.861340  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:53.861359  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.861366  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.861371  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.861377  110500 round_trippers.go:580]     Audit-Id: 4dac6af2-31c0-4d5b-ad54-0b90bc13b7f1
	I0114 10:32:53.861382  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.861387  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.861392  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.861477  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6542 chars]
	I0114 10:32:53.861953  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.861968  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.861976  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.861982  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.863752  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.863776  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.863786  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.863795  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.863803  110500 round_trippers.go:580]     Audit-Id: 3d572e93-949f-499a-a293-4ddb3e2a2d6d
	I0114 10:32:53.863812  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.863821  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.863829  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.863946  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.864233  110500 pod_ready.go:92] pod "coredns-565d847f94-f5dzh" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.864250  110500 pod_ready.go:81] duration metric: took 16.39921044s waiting for pod "coredns-565d847f94-f5dzh" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.864258  110500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.864298  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-102822
	I0114 10:32:53.864306  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.864313  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.864318  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.865875  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.865896  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.865905  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.865916  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.865925  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.865938  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.865949  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.865961  110500 round_trippers.go:580]     Audit-Id: 82f791e6-7642-40ac-a8b0-fb679511ec02
	I0114 10:32:53.866052  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102822","namespace":"kube-system","uid":"1e27ecaf-1025-426a-a47b-3d2cf95d1090","resourceVersion":"777","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.mirror":"aef019d10f8e4135ccea898513a684dd","kubernetes.io/config.seen":"2023-01-14T10:28:42.123435085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6035 chars]
	I0114 10:32:53.866410  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.866422  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.866429  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.866436  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.867777  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.867797  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.867807  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.867816  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.867825  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.867838  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.867848  110500 round_trippers.go:580]     Audit-Id: 058bebb4-b275-46c2-9a74-1b5ca44db29a
	I0114 10:32:53.867861  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.867985  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.868296  110500 pod_ready.go:92] pod "etcd-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.868309  110500 pod_ready.go:81] duration metric: took 4.045313ms waiting for pod "etcd-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.868324  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.868372  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102822
	I0114 10:32:53.868380  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.868387  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.868394  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.870207  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.870223  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.870234  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.870243  110500 round_trippers.go:580]     Audit-Id: 1d446373-a202-405c-b237-c6843904c253
	I0114 10:32:53.870261  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.870270  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.870283  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.870295  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.870409  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102822","namespace":"kube-system","uid":"c74a88c9-d603-4a80-a194-de75c8d0a3a5","resourceVersion":"770","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.mirror":"a912ee0a84c59151c3514b96c1018750","kubernetes.io/config.seen":"2023-01-14T10:28:42.123458577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8421 chars]
	I0114 10:32:53.870795  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.870809  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.870819  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.870828  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.872366  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.872389  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.872399  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.872408  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.872424  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.872434  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.872447  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.872460  110500 round_trippers.go:580]     Audit-Id: 472b3cbd-28d7-44ac-a15d-56c46a3c4908
	I0114 10:32:53.872539  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.872804  110500 pod_ready.go:92] pod "kube-apiserver-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.872817  110500 pod_ready.go:81] duration metric: took 4.480396ms waiting for pod "kube-apiserver-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.872828  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.872870  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102822
	I0114 10:32:53.872880  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.872890  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.872900  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.874460  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.874482  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.874490  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.874497  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.874504  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.874510  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.874515  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.874520  110500 round_trippers.go:580]     Audit-Id: c397a327-3c31-4fec-abe6-15a2e07084e1
	I0114 10:32:53.874669  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102822","namespace":"kube-system","uid":"85c8a264-96f3-4fcf-affd-917b94bdd177","resourceVersion":"774","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.mirror":"25abfe2eda05e02954e17f09cf270068","kubernetes.io/config.seen":"2023-01-14T10:28:42.123460297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7996 chars]
	I0114 10:32:53.875051  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:53.875063  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.875070  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.875077  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.876617  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.876697  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.876721  110500 round_trippers.go:580]     Audit-Id: fec4f21d-1840-4d29-b02a-62b90536fe0e
	I0114 10:32:53.876728  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.876733  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.876741  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.876747  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.876754  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.876851  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:53.877124  110500 pod_ready.go:92] pod "kube-controller-manager-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.877137  110500 pod_ready.go:81] duration metric: took 4.30113ms waiting for pod "kube-controller-manager-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.877149  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.877191  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d5n6
	I0114 10:32:53.877201  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.877219  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.877234  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.878711  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.878727  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.878734  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.878742  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.878750  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.878777  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.878783  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.878789  110500 round_trippers.go:580]     Audit-Id: 84194089-138c-4790-b159-7929e98278bb
	I0114 10:32:53.878874  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4d5n6","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dba561b-e827-4a6e-afd9-11c68b7e4447","resourceVersion":"471","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5522 chars]
	I0114 10:32:53.879244  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:32:53.879258  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:53.879265  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:53.879271  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:53.880734  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:53.880759  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:53.880769  110500 round_trippers.go:580]     Audit-Id: f14db8a2-ec1f-4484-a93c-8313811b037d
	I0114 10:32:53.880779  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:53.880788  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:53.880796  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:53.880802  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:53.880808  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:53 GMT
	I0114 10:32:53.880892  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4","resourceVersion":"549","creationTimestamp":"2023-01-14T10:29:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io
/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1" [truncated 4430 chars]
	I0114 10:32:53.881144  110500 pod_ready.go:92] pod "kube-proxy-4d5n6" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:53.881161  110500 pod_ready.go:81] duration metric: took 4.002111ms waiting for pod "kube-proxy-4d5n6" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:53.881169  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.059627  110500 request.go:614] Waited for 178.374902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:54.059726  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bzd24
	I0114 10:32:54.059741  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.059754  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.059768  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.061871  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.061896  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.061908  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.061916  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.061923  110500 round_trippers.go:580]     Audit-Id: aed13262-a010-46cc-af38-a5bd25ab0d48
	I0114 10:32:54.061933  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.061944  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.061957  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.062158  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bzd24","generateName":"kube-proxy-","namespace":"kube-system","uid":"3191f786-4823-486a-90e6-be1b1180c23a","resourceVersion":"660","creationTimestamp":"2023-01-14T10:30:20Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
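[Note: the "Waited for ... due to client-side throttling, not priority and fairness" lines are the Kubernetes client's own token-bucket rate limiter delaying requests before they ever leave the process; as the message says, this is not server-side APF. A sketch of the mechanism using golang.org/x/time/rate; the 5 QPS / burst 10 values are illustrative, not read from this log.]

// throttle_sketch.go: token-bucket waiting, the shape behind those log lines.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 req/s, burst of 10
	for i := 0; i < 15; i++ {
		start := time.Now()
		_ = limiter.Wait(context.Background()) // blocks once the burst is spent
		if d := time.Since(start); d > time.Millisecond {
			fmt.Printf("request %d waited %v\n", i, d)
		}
	}
}

[Once the burst is exhausted, each further call waits about 1/QPS seconds; at 5 QPS that is ~200ms, consistent with the ~180-200ms gaps reported in the log.]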
	I0114 10:32:54.259954  110500 request.go:614] Waited for 197.349508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:54.260021  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m03
	I0114 10:32:54.260027  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.260035  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.260044  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.261934  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:54.261954  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.261961  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.261967  110500 round_trippers.go:580]     Audit-Id: 4baf5a74-b2c4-429b-8b3c-4634d55c2954
	I0114 10:32:54.261972  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.261977  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.261983  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.261991  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.262089  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m03","uid":"7fb9d125-0cce-4853-b9a9-9348e20e7ae7","resourceVersion":"674","creationTimestamp":"2023-01-14T10:31:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu
mes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{"." [truncated 4248 chars]
	I0114 10:32:54.262369  110500 pod_ready.go:92] pod "kube-proxy-bzd24" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:54.262382  110500 pod_ready.go:81] duration metric: took 381.20861ms waiting for pod "kube-proxy-bzd24" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.262392  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.459841  110500 request.go:614] Waited for 197.376659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:54.459903  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlcll
	I0114 10:32:54.459909  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.459917  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.459930  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.462326  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.462351  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.462362  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.462371  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.462381  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.462394  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.462407  110500 round_trippers.go:580]     Audit-Id: d19a543b-1f00-4a60-a98d-7c9d97051362
	I0114 10:32:54.462420  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.462560  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlcll","generateName":"kube-proxy-","namespace":"kube-system","uid":"91e05737-5cbf-404c-8b7c-75045f584885","resourceVersion":"718","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d5be29ed-a3bc-4e65-829c-e22d7a437814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5be29ed-a3bc-4e65-829c-e22d7a437814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5727 chars]
	I0114 10:32:54.659432  110500 request.go:614] Waited for 196.351122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:54.659482  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:54.659487  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.659495  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.659501  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.661545  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.661569  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.661580  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.661589  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.661598  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.661610  110500 round_trippers.go:580]     Audit-Id: e4c1d17c-ce25-4fe7-a6f5-b07ee5fc48ab
	I0114 10:32:54.661624  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.661635  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.661721  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:54.662011  110500 pod_ready.go:92] pod "kube-proxy-qlcll" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:54.662023  110500 pod_ready.go:81] duration metric: took 399.620307ms waiting for pod "kube-proxy-qlcll" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.662034  110500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:54.859494  110500 request.go:614] Waited for 197.400011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:54.859552  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102822
	I0114 10:32:54.859557  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:54.859564  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:54.859571  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:54.861821  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:54.861852  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:54.861865  110500 round_trippers.go:580]     Audit-Id: ccc6f823-3989-4b53-8ff0-d1337b0b0a61
	I0114 10:32:54.861874  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:54.861886  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:54.861899  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:54.861909  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:54.861919  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:54 GMT
	I0114 10:32:54.862042  110500 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102822","namespace":"kube-system","uid":"63ee442e-88de-44d9-8512-98c56f1b4942","resourceVersion":"725","creationTimestamp":"2023-01-14T10:28:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.mirror":"d237517a42c84f1ce3a5813805068975","kubernetes.io/config.seen":"2023-01-14T10:28:42.123461701Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4878 chars]
	I0114 10:32:55.059823  110500 request.go:614] Waited for 197.354082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:55.059872  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822
	I0114 10:32:55.059876  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.059884  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.059897  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.062004  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:55.062027  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.062038  110500 round_trippers.go:580]     Audit-Id: d564f563-4343-426b-a870-97784639d546
	I0114 10:32:55.062046  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.062056  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.062064  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.062073  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.062086  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.062203  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2023-01-14T10:28:39Z","fieldsType":"FieldsV [truncated 5247 chars]
	I0114 10:32:55.062558  110500 pod_ready.go:92] pod "kube-scheduler-multinode-102822" in "kube-system" namespace has status "Ready":"True"
	I0114 10:32:55.062577  110500 pod_ready.go:81] duration metric: took 400.532751ms waiting for pod "kube-scheduler-multinode-102822" in "kube-system" namespace to be "Ready" ...
	I0114 10:32:55.062591  110500 pod_ready.go:38] duration metric: took 17.803879464s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:32:55.062613  110500 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:32:55.062662  110500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:32:55.072334  110500 command_runner.go:130] > 1113
	I0114 10:32:55.072394  110500 api_server.go:71] duration metric: took 18.002517039s to wait for apiserver process to appear ...
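[Note: the apiserver-process wait above is a plain `pgrep -xnf` run through ssh_runner; the "1113" line is the matched PID. A local sketch of the same check, run directly rather than over SSH:]

// pgrep_sketch.go: find a process by full command line (-f), exact (-x), newest (-n).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err) // pgrep exits non-zero on no match
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // e.g. 1113
}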
	I0114 10:32:55.072408  110500 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:32:55.072418  110500 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:32:55.077405  110500 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0114 10:32:55.077450  110500 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0114 10:32:55.077454  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.077462  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.077468  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.078157  110500 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0114 10:32:55.078182  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.078189  110500 round_trippers.go:580]     Audit-Id: adb19759-4de9-46d5-95cf-8b480d9bd7f5
	I0114 10:32:55.078195  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.078200  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.078207  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.078213  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.078219  110500 round_trippers.go:580]     Content-Length: 263
	I0114 10:32:55.078224  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.078239  110500 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 10:32:55.078279  110500 api_server.go:140] control plane version: v1.25.3
	I0114 10:32:55.078291  110500 api_server.go:130] duration metric: took 5.878523ms to wait for apiserver health ...
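[Note: the health wait is two endpoints: /healthz must return 200 "ok", then /version yields the control-plane version parsed above. A hedged sketch; the InsecureSkipVerify transport is an example-only shortcut where minikube presents the cluster's client certificates, and it assumes default RBAC still exposes /healthz and /version to unauthenticated callers.]

// healthz_sketch.go: probe the apiserver health and version endpoints.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := c.Get("https://192.168.58.2:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}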
	I0114 10:32:55.078298  110500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:32:55.259707  110500 request.go:614] Waited for 181.324482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.259776  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.259781  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.259791  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.259800  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.263248  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:55.263272  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.263280  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.263286  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.263292  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.263297  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.263303  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.263310  110500 round_trippers.go:580]     Audit-Id: 8a596230-8294-46c4-b65b-17375baa5d42
	I0114 10:32:55.263957  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84909 chars]
	I0114 10:32:55.266933  110500 system_pods.go:59] 12 kube-system pods found
	I0114 10:32:55.266958  110500 system_pods.go:61] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:55.266964  110500 system_pods.go:61] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running
	I0114 10:32:55.266968  110500 system_pods.go:61] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:55.266972  110500 system_pods.go:61] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:55.266976  110500 system_pods.go:61] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running
	I0114 10:32:55.266980  110500 system_pods.go:61] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:55.266986  110500 system_pods.go:61] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running
	I0114 10:32:55.266993  110500 system_pods.go:61] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:55.266998  110500 system_pods.go:61] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:55.267003  110500 system_pods.go:61] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:55.267007  110500 system_pods.go:61] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:55.267014  110500 system_pods.go:61] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:32:55.267026  110500 system_pods.go:74] duration metric: took 188.723685ms to wait for pod list to return data ...
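[Note the storage-provisioner entry above: a pod can be phase Running while its Ready/ContainersReady conditions are still False, and the wait proceeds anyway, so Running alone satisfies it here. A sketch that surfaces both fields, under the same `kubectl proxy` assumption as the earlier sketch:]

// podlist_sketch.go: list kube-system pods with phase vs. Ready condition.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var pl podList
	if err := json.NewDecoder(resp.Body).Decode(&pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%-45s phase=%-9s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}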
	I0114 10:32:55.267033  110500 default_sa.go:34] waiting for default service account to be created ...
	I0114 10:32:55.459429  110500 request.go:614] Waited for 192.340757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0114 10:32:55.459508  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0114 10:32:55.459520  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.459532  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.459547  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.461495  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:32:55.461513  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.461520  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.461526  110500 round_trippers.go:580]     Audit-Id: 29aeece9-a829-4503-b39f-8d5844636b92
	I0114 10:32:55.461531  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.461536  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.461541  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.461547  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.461552  110500 round_trippers.go:580]     Content-Length: 261
	I0114 10:32:55.461569  110500 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ed5470f9-ec28-44cb-ac49-0dbdbeab7993","resourceVersion":"329","creationTimestamp":"2023-01-14T10:28:55Z"}}]}
	I0114 10:32:55.461729  110500 default_sa.go:45] found service account: "default"
	I0114 10:32:55.461745  110500 default_sa.go:55] duration metric: took 194.706664ms for default service account to be created ...
	I0114 10:32:55.461752  110500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 10:32:55.659076  110500 request.go:614] Waited for 197.269144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.659133  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0114 10:32:55.659137  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.659145  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.659152  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.662730  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:32:55.662755  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.662770  110500 round_trippers.go:580]     Audit-Id: d073e5a7-e138-4f76-b486-e769fbc5f5e6
	I0114 10:32:55.662778  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.662786  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.662794  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.662804  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.662817  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.663478  110500 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"coredns-565d847f94-f5dzh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb","resourceVersion":"789","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a6b024c7-260a-47e5-8b29-d457680b1db4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6b024c7-260a-47e5-8b29-d457680b1db4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84909 chars]
	I0114 10:32:55.666072  110500 system_pods.go:86] 12 kube-system pods found
	I0114 10:32:55.666095  110500 system_pods.go:89] "coredns-565d847f94-f5dzh" [7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb] Running
	I0114 10:32:55.666101  110500 system_pods.go:89] "etcd-multinode-102822" [1e27ecaf-1025-426a-a47b-3d2cf95d1090] Running
	I0114 10:32:55.666106  110500 system_pods.go:89] "kindnet-bwgvn" [56ac9738-db28-4d92-8f39-218e59fb8fa0] Running
	I0114 10:32:55.666111  110500 system_pods.go:89] "kindnet-fb2ng" [cc3bf4e2-0d87-4fd3-b981-174cef444038] Running
	I0114 10:32:55.666116  110500 system_pods.go:89] "kindnet-zm4vf" [e664ab33-93db-45e5-a147-81c14dd05837] Running
	I0114 10:32:55.666123  110500 system_pods.go:89] "kube-apiserver-multinode-102822" [c74a88c9-d603-4a80-a194-de75c8d0a3a5] Running
	I0114 10:32:55.666132  110500 system_pods.go:89] "kube-controller-manager-multinode-102822" [85c8a264-96f3-4fcf-affd-917b94bdd177] Running
	I0114 10:32:55.666138  110500 system_pods.go:89] "kube-proxy-4d5n6" [2dba561b-e827-4a6e-afd9-11c68b7e4447] Running
	I0114 10:32:55.666145  110500 system_pods.go:89] "kube-proxy-bzd24" [3191f786-4823-486a-90e6-be1b1180c23a] Running
	I0114 10:32:55.666149  110500 system_pods.go:89] "kube-proxy-qlcll" [91e05737-5cbf-404c-8b7c-75045f584885] Running
	I0114 10:32:55.666153  110500 system_pods.go:89] "kube-scheduler-multinode-102822" [63ee442e-88de-44d9-8512-98c56f1b4942] Running
	I0114 10:32:55.666163  110500 system_pods.go:89] "storage-provisioner" [ae50847f-5144-4e4b-a340-5cbd0bbb55a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:32:55.666171  110500 system_pods.go:126] duration metric: took 204.414948ms to wait for k8s-apps to be running ...
	I0114 10:32:55.666181  110500 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 10:32:55.666219  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:32:55.675950  110500 system_svc.go:56] duration metric: took 9.754834ms WaitForService to wait for kubelet.
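[Note: the kubelet check relies on `systemctl is-active --quiet`, which prints nothing and answers purely through its exit status. A local sketch:]

// kubelet_check_sketch.go: exit code 0 means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet running:", err == nil) // non-nil err => non-zero exit
}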
	I0114 10:32:55.675980  110500 kubeadm.go:573] duration metric: took 18.606132423s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 10:32:55.675999  110500 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:32:55.859434  110500 request.go:614] Waited for 183.362713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:55.859502  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0114 10:32:55.859514  110500 round_trippers.go:469] Request Headers:
	I0114 10:32:55.859522  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:32:55.859528  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:32:55.862021  110500 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 10:32:55.862052  110500 round_trippers.go:577] Response Headers:
	I0114 10:32:55.862064  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:32:55.862073  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:32:55.862082  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:32:55.862095  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:55 GMT
	I0114 10:32:55.862111  110500 round_trippers.go:580]     Audit-Id: 6dd48a4e-f50e-492a-8dff-09e669805baa
	I0114 10:32:55.862119  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:32:55.862314  110500 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"793"},"items":[{"metadata":{"name":"multinode-102822","uid":"8decbace-8574-4449-b645-2dcf216194fb","resourceVersion":"679","creationTimestamp":"2023-01-14T10:28:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-102822","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T10_28_43_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15962 chars]
	I0114 10:32:55.862905  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862918  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862930  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862936  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862941  110500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:32:55.862945  110500 node_conditions.go:123] node cpu capacity is 8
	I0114 10:32:55.862952  110500 node_conditions.go:105] duration metric: took 186.947551ms to run NodePressure ...
	I0114 10:32:55.862961  110500 start.go:217] waiting for startup goroutines ...
	I0114 10:32:55.863404  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:55.863497  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:55.866941  110500 out.go:177] * Starting worker node multinode-102822-m02 in cluster multinode-102822
	I0114 10:32:55.868288  110500 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:32:55.869765  110500 out.go:177] * Pulling base image ...
	I0114 10:32:55.871152  110500 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:32:55.871175  110500 cache.go:57] Caching tarball of preloaded images
	I0114 10:32:55.871229  110500 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:32:55.871304  110500 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:32:55.871330  110500 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:32:55.871441  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:55.893561  110500 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:32:55.893584  110500 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:32:55.893609  110500 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:32:55.893646  110500 start.go:364] acquiring machines lock for multinode-102822-m02: {Name:mk25af419661492cbd58b718b64b51677c98136a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:32:55.893781  110500 start.go:368] acquired machines lock for "multinode-102822-m02" in 104.709µs
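[Note: the "machines lock" serializes provisioning of m02 against other goroutines and processes; the spec above shows a 500ms retry delay and a 10m timeout. minikube's actual lock implementation differs; the following is a generic, Linux-only illustration of the same idea using flock(2), with a made-up lock path.]

// machines_lock_sketch.go: cross-process exclusion via an advisory file lock.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func main() {
	f, err := os.OpenFile("/tmp/machines-demo.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	start := time.Now()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks until free
		panic(err)
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	fmt.Printf("acquired machines lock in %v\n", time.Since(start))
}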
	I0114 10:32:55.893802  110500 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:32:55.893807  110500 fix.go:55] fixHost starting: m02
	I0114 10:32:55.894020  110500 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:32:55.917751  110500 fix.go:103] recreateIfNeeded on multinode-102822-m02: state=Stopped err=<nil>
	W0114 10:32:55.917777  110500 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:32:55.920259  110500 out.go:177] * Restarting existing docker container for "multinode-102822-m02" ...
	I0114 10:32:55.921900  110500 cli_runner.go:164] Run: docker start multinode-102822-m02
	I0114 10:32:56.303574  110500 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:32:56.328701  110500 kic.go:426] container "multinode-102822-m02" state is running.
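[Note: state probes like the one above are single-field `docker container inspect` calls with a Go template. A sketch reusing the container name from the log:]

// docker_state_sketch.go: pull one field from `docker container inspect`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"multinode-102822-m02", "--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", strings.TrimSpace(string(out))) // e.g. "running"
}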
	I0114 10:32:56.329001  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:32:56.353818  110500 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/config.json ...
	I0114 10:32:56.354054  110500 machine.go:88] provisioning docker machine ...
	I0114 10:32:56.354080  110500 ubuntu.go:169] provisioning hostname "multinode-102822-m02"
	I0114 10:32:56.354126  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:56.378925  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:56.379088  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32872 <nil> <nil>}
	I0114 10:32:56.379107  110500 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102822-m02 && echo "multinode-102822-m02" | sudo tee /etc/hostname
	I0114 10:32:56.379767  110500 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47252->127.0.0.1:32872: read: connection reset by peer
	I0114 10:32:59.504292  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102822-m02
	
	I0114 10:32:59.504372  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.529104  110500 main.go:134] libmachine: Using SSH client type: native
	I0114 10:32:59.529255  110500 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32872 <nil> <nil>}
	I0114 10:32:59.529273  110500 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102822-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102822-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102822-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:32:59.643446  110500 main.go:134] libmachine: SSH cmd err, output: <nil>: 
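[Note: everything past `docker start` is provisioned over SSH to the forwarded port (127.0.0.1:32872 here), authenticated with the machine's generated key; the earlier "connection reset by peer" is the dialer racing the container's sshd and is retried. A hedged sketch of running one such command; the key path is the SSHKeyPath printed later in the log, and InsecureIgnoreHostKey is an example-only shortcut.]

// ssh_provision_sketch.go: run a single command over the machine's SSH port.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32872", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local sketch only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("out=%q err=%v\n", out, err)
}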
	I0114 10:32:59.643478  110500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:32:59.643495  110500 ubuntu.go:177] setting up certificates
	I0114 10:32:59.643503  110500 provision.go:83] configureAuth start
	I0114 10:32:59.643550  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:32:59.666898  110500 provision.go:138] copyHostCerts
	I0114 10:32:59.666931  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:59.666953  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:32:59.666961  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:32:59.667021  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:32:59.667087  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:59.667104  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:32:59.667109  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:32:59.667132  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:32:59.667170  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:59.667183  110500 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:32:59.667189  110500 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:32:59.667207  110500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:32:59.667255  110500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.multinode-102822-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-102822-m02]
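[Note: configureAuth issues a fresh server certificate whose SANs cover every name and address a client might dial (node IP, localhost, hostname), as listed in the san=[...] line above. A sketch of issuing such a cert with those SANs; self-signed for brevity, whereas minikube signs with the CA key pair named in the options.]

// server_cert_sketch.go: self-signed server cert carrying the log's SANs.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-102822-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-102822-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}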
	I0114 10:32:59.772545  110500 provision.go:172] copyRemoteCerts
	I0114 10:32:59.772598  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:32:59.772629  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.795246  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:32:59.879398  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 10:32:59.879459  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:32:59.896300  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 10:32:59.896363  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0114 10:32:59.913524  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 10:32:59.913588  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:32:59.930401  110500 provision.go:86] duration metric: configureAuth took 286.883524ms
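For orientation: the provision.go:112 step above is an ordinary CA-signed leaf certificate carrying the logged SANs. A minimal Go sketch of that idea, assuming PKCS#1 PEM files named ca.pem/ca-key.pem in the working directory; file names, error handling, and the omitted server-key.pem write are simplifications, not minikube's actual code:

// sketch_servercert.go: illustrative only; minikube's real cert code differs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Assumption: ca.pem / ca-key.pem are the PEM files referenced in the log.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	keyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("bad PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // minikube CA keys are PKCS#1 RSA
	check(err)

	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-102822-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go:112 line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-102822-m02"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)
	out, err := os.Create("server.pem")
	check(err)
	defer out.Close()
	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}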
	I0114 10:32:59.930432  110500 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:32:59.930616  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:32:59.930627  110500 machine.go:91] provisioned docker machine in 3.576558371s
	I0114 10:32:59.930634  110500 start.go:300] post-start starting for "multinode-102822-m02" (driver="docker")
	I0114 10:32:59.930640  110500 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:32:59.930681  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:32:59.930713  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:32:59.954609  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.039058  110500 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:33:00.041623  110500 command_runner.go:130] > NAME="Ubuntu"
	I0114 10:33:00.041638  110500 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 10:33:00.041643  110500 command_runner.go:130] > ID=ubuntu
	I0114 10:33:00.041652  110500 command_runner.go:130] > ID_LIKE=debian
	I0114 10:33:00.041659  110500 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 10:33:00.041669  110500 command_runner.go:130] > VERSION_ID="20.04"
	I0114 10:33:00.041686  110500 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 10:33:00.041695  110500 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 10:33:00.041702  110500 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 10:33:00.041715  110500 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 10:33:00.041722  110500 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 10:33:00.041727  110500 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 10:33:00.041809  110500 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:33:00.041826  110500 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:33:00.041837  110500 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:33:00.041848  110500 info.go:137] Remote host: Ubuntu 20.04.5 LTS
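The three "Couldn't set key ..., no corresponding struct field found" warnings come from libmachine decoding /etc/os-release into a fixed struct; keys without a matching field are skipped. A standalone sketch that parses the same file into a map instead (illustrative, not libmachine's code):

// sketch_osrelease.go: minimal /etc/os-release parser.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`) // values may be quoted, as in the dump above
	}
	fmt.Printf("Remote host: %s\n", info["PRETTY_NAME"])
}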
	I0114 10:33:00.041863  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:33:00.041918  110500 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:33:00.042001  110500 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:33:00.042015  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /etc/ssl/certs/103062.pem
	I0114 10:33:00.042098  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:33:00.048428  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:33:00.065256  110500 start.go:303] post-start completed in 134.608719ms
	I0114 10:33:00.065333  110500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:33:00.065370  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.089347  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.171707  110500 command_runner.go:130] > 18%
	I0114 10:33:00.171953  110500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:33:00.175843  110500 command_runner.go:130] > 239G
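The two df | awk pipelines above read the used-space percentage and the free gigabytes of /var. The same data can be collected with a single exec call; a sketch assuming GNU coreutils df column order (Filesystem, 1G-blocks, Used, Available, Use%, Mounted on):

// sketch_df.go: one df call instead of the two df|awk pipelines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("df", "-BG", "/var").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	fields := strings.Fields(lines[1]) // NR==2 in the awk version
	fmt.Println("available:", fields[3], "used:", fields[4])
}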
	I0114 10:33:00.175869  110500 fix.go:57] fixHost completed within 4.282059343s
	I0114 10:33:00.175880  110500 start.go:83] releasing machines lock for "multinode-102822-m02", held for 4.282085064s
	I0114 10:33:00.175958  110500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:33:00.204131  110500 out.go:177] * Found network options:
	I0114 10:33:00.205771  110500 out.go:177]   - NO_PROXY=192.168.58.2
	W0114 10:33:00.207135  110500 proxy.go:119] fail to check proxy env: Error ip not in block
	W0114 10:33:00.207169  110500 proxy.go:119] fail to check proxy env: Error ip not in block
	I0114 10:33:00.207243  110500 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:33:00.207283  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.207345  110500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:33:00.207411  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:33:00.233561  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.237305  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32872 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:33:00.349674  110500 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 10:33:00.349754  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:33:00.358943  110500 docker.go:189] disabling docker service ...
	I0114 10:33:00.358987  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:33:00.368539  110500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:33:00.377330  110500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:33:00.458087  110500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:33:00.532879  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:33:00.542016  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:33:00.553731  110500 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:33:00.553758  110500 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0114 10:33:00.554499  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:33:00.562521  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:33:00.570518  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:33:00.578409  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
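The four sed passes above patch /etc/containerd/config.toml in place before the daemon-reload and restart that follow. An equivalent sketch in Go using regexp, for readers who want the edits in one place (minikube itself shells out to sed, as logged):

// sketch_toml_patch.go: the four sed edits as a single regexp pass; sketch only.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	repl := []struct{ re, with string }{
		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`},
		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
	}
	for _, r := range repl {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.with))
	}
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}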
	I0114 10:33:00.586533  110500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:33:00.593065  110500 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0114 10:33:00.593116  110500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:33:00.599333  110500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:33:00.669768  110500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:33:00.743175  110500 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:33:00.743240  110500 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:33:00.746690  110500 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0114 10:33:00.746715  110500 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 10:33:00.746721  110500 command_runner.go:130] > Device: fch/252d	Inode: 118         Links: 1
	I0114 10:33:00.746728  110500 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:33:00.746734  110500 command_runner.go:130] > Access: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746739  110500 command_runner.go:130] > Modify: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746744  110500 command_runner.go:130] > Change: 2023-01-14 10:33:00.739210057 +0000
	I0114 10:33:00.746750  110500 command_runner.go:130] >  Birth: -
	I0114 10:33:00.746772  110500 start.go:472] Will wait 60s for crictl version
	I0114 10:33:00.746812  110500 ssh_runner.go:195] Run: which crictl
	I0114 10:33:00.749846  110500 command_runner.go:130] > /usr/bin/crictl
	I0114 10:33:00.749902  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:33:00.774327  110500 command_runner.go:130] ! time="2023-01-14T10:33:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:33:00.774386  110500 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:33:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:33:15.181491  110500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:33:15.203527  110500 command_runner.go:130] > Version:  0.1.0
	I0114 10:33:15.203557  110500 command_runner.go:130] > RuntimeName:  containerd
	I0114 10:33:15.203564  110500 command_runner.go:130] > RuntimeVersion:  1.6.10
	I0114 10:33:15.203572  110500 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0114 10:33:15.203593  110500 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:33:15.203645  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:33:15.225426  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:33:15.226779  110500 ssh_runner.go:195] Run: containerd --version
	I0114 10:33:15.248561  110500 command_runner.go:130] > containerd containerd.io 1.6.10 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
	I0114 10:33:15.252201  110500 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:33:15.253862  110500 out.go:177]   - env NO_PROXY=192.168.58.2
	I0114 10:33:15.255234  110500 cli_runner.go:164] Run: docker network inspect multinode-102822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:33:15.278552  110500 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0114 10:33:15.281742  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
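The /etc/hosts one-liner above is an idempotent update: strip any stale host.minikube.internal mapping, then append the current one. The same idiom in Go (a sketch; the real code runs the bash pipeline over SSH as shown, and writing /etc/hosts needs root):

// sketch_hosts.go: drop the old host.minikube.internal line, append the new one.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.58.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") { // grep -v equivalent
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
		panic(err)
	}
}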
	I0114 10:33:15.290843  110500 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822 for IP: 192.168.58.3
	I0114 10:33:15.290938  110500 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:33:15.290983  110500 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:33:15.291001  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 10:33:15.291018  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 10:33:15.291034  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:33:15.291044  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:33:15.291086  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:33:15.291122  110500 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:33:15.291137  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:33:15.291172  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:33:15.291200  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:33:15.291232  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:33:15.291294  110500 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:33:15.291328  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem -> /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.291340  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.291350  110500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.291733  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:33:15.308950  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:33:15.326682  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:33:15.343586  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:33:15.360810  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:33:15.378267  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:33:15.394623  110500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:33:15.411806  110500 ssh_runner.go:195] Run: openssl version
	I0114 10:33:15.416453  110500 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 10:33:15.416527  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:33:15.423431  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426436  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426476  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.426513  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:33:15.431093  110500 command_runner.go:130] > 51391683
	I0114 10:33:15.431280  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:33:15.438119  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:33:15.445156  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448124  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448172  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.448210  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:33:15.452776  110500 command_runner.go:130] > 3ec20f2e
	I0114 10:33:15.452816  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:33:15.459313  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:33:15.466112  110500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.468925  110500 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.469032  110500 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.469077  110500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:33:15.473645  110500 command_runner.go:130] > b5213941
	I0114 10:33:15.473797  110500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
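The openssl x509 -hash / ln -fs pairs above install each CA under its OpenSSL subject-hash name (e.g. b5213941.0) so TLS libraries can locate it in /etc/ssl/certs. A sketch that shells out to openssl for the hash, since it is OpenSSL's canonicalized-subject digest rather than a plain SHA of the file:

// sketch_cahash.go: link a CA cert under its subject-hash name, ln -fs style.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link; error ignored on purpose
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}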
	I0114 10:33:15.480540  110500 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:33:15.504106  110500 command_runner.go:130] > {
	I0114 10:33:15.504131  110500 command_runner.go:130] >   "status": {
	I0114 10:33:15.504139  110500 command_runner.go:130] >     "conditions": [
	I0114 10:33:15.504148  110500 command_runner.go:130] >       {
	I0114 10:33:15.504158  110500 command_runner.go:130] >         "type": "RuntimeReady",
	I0114 10:33:15.504166  110500 command_runner.go:130] >         "status": true,
	I0114 10:33:15.504173  110500 command_runner.go:130] >         "reason": "",
	I0114 10:33:15.504181  110500 command_runner.go:130] >         "message": ""
	I0114 10:33:15.504191  110500 command_runner.go:130] >       },
	I0114 10:33:15.504197  110500 command_runner.go:130] >       {
	I0114 10:33:15.504207  110500 command_runner.go:130] >         "type": "NetworkReady",
	I0114 10:33:15.504216  110500 command_runner.go:130] >         "status": true,
	I0114 10:33:15.504226  110500 command_runner.go:130] >         "reason": "",
	I0114 10:33:15.504245  110500 command_runner.go:130] >         "message": ""
	I0114 10:33:15.504252  110500 command_runner.go:130] >       }
	I0114 10:33:15.504255  110500 command_runner.go:130] >     ]
	I0114 10:33:15.504259  110500 command_runner.go:130] >   },
	I0114 10:33:15.504263  110500 command_runner.go:130] >   "cniconfig": {
	I0114 10:33:15.504267  110500 command_runner.go:130] >     "PluginDirs": [
	I0114 10:33:15.504272  110500 command_runner.go:130] >       "/opt/cni/bin"
	I0114 10:33:15.504276  110500 command_runner.go:130] >     ],
	I0114 10:33:15.504281  110500 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.mk",
	I0114 10:33:15.504286  110500 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0114 10:33:15.504290  110500 command_runner.go:130] >     "Prefix": "eth",
	I0114 10:33:15.504295  110500 command_runner.go:130] >     "Networks": [
	I0114 10:33:15.504299  110500 command_runner.go:130] >       {
	I0114 10:33:15.504306  110500 command_runner.go:130] >         "Config": {
	I0114 10:33:15.504311  110500 command_runner.go:130] >           "Name": "cni-loopback",
	I0114 10:33:15.504319  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:33:15.504325  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:33:15.504329  110500 command_runner.go:130] >             {
	I0114 10:33:15.504334  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504341  110500 command_runner.go:130] >                 "type": "loopback",
	I0114 10:33:15.504345  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:33:15.504352  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504356  110500 command_runner.go:130] >               },
	I0114 10:33:15.504361  110500 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0114 10:33:15.504367  110500 command_runner.go:130] >             }
	I0114 10:33:15.504370  110500 command_runner.go:130] >           ],
	I0114 10:33:15.504382  110500 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0114 10:33:15.504386  110500 command_runner.go:130] >         },
	I0114 10:33:15.504391  110500 command_runner.go:130] >         "IFName": "lo"
	I0114 10:33:15.504397  110500 command_runner.go:130] >       },
	I0114 10:33:15.504400  110500 command_runner.go:130] >       {
	I0114 10:33:15.504406  110500 command_runner.go:130] >         "Config": {
	I0114 10:33:15.504413  110500 command_runner.go:130] >           "Name": "kindnet",
	I0114 10:33:15.504419  110500 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0114 10:33:15.504424  110500 command_runner.go:130] >           "Plugins": [
	I0114 10:33:15.504431  110500 command_runner.go:130] >             {
	I0114 10:33:15.504435  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504442  110500 command_runner.go:130] >                 "type": "ptp",
	I0114 10:33:15.504446  110500 command_runner.go:130] >                 "ipam": {
	I0114 10:33:15.504452  110500 command_runner.go:130] >                   "type": "host-local"
	I0114 10:33:15.504456  110500 command_runner.go:130] >                 },
	I0114 10:33:15.504460  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504466  110500 command_runner.go:130] >               },
	I0114 10:33:15.504480  110500 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.1.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0114 10:33:15.504488  110500 command_runner.go:130] >             },
	I0114 10:33:15.504492  110500 command_runner.go:130] >             {
	I0114 10:33:15.504499  110500 command_runner.go:130] >               "Network": {
	I0114 10:33:15.504503  110500 command_runner.go:130] >                 "type": "portmap",
	I0114 10:33:15.504510  110500 command_runner.go:130] >                 "capabilities": {
	I0114 10:33:15.504515  110500 command_runner.go:130] >                   "portMappings": true
	I0114 10:33:15.504521  110500 command_runner.go:130] >                 },
	I0114 10:33:15.504527  110500 command_runner.go:130] >                 "ipam": {},
	I0114 10:33:15.504535  110500 command_runner.go:130] >                 "dns": {}
	I0114 10:33:15.504540  110500 command_runner.go:130] >               },
	I0114 10:33:15.504550  110500 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0114 10:33:15.504554  110500 command_runner.go:130] >             }
	I0114 10:33:15.504561  110500 command_runner.go:130] >           ],
	I0114 10:33:15.504591  110500 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.1.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0114 10:33:15.504601  110500 command_runner.go:130] >         },
	I0114 10:33:15.504605  110500 command_runner.go:130] >         "IFName": "eth0"
	I0114 10:33:15.504609  110500 command_runner.go:130] >       }
	I0114 10:33:15.504612  110500 command_runner.go:130] >     ]
	I0114 10:33:15.504618  110500 command_runner.go:130] >   },
	I0114 10:33:15.504622  110500 command_runner.go:130] >   "config": {
	I0114 10:33:15.504626  110500 command_runner.go:130] >     "containerd": {
	I0114 10:33:15.504631  110500 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0114 10:33:15.504637  110500 command_runner.go:130] >       "defaultRuntimeName": "default",
	I0114 10:33:15.504641  110500 command_runner.go:130] >       "defaultRuntime": {
	I0114 10:33:15.504649  110500 command_runner.go:130] >         "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504654  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:33:15.504658  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:33:15.504665  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:33:15.504670  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:33:15.504674  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:33:15.504681  110500 command_runner.go:130] >         "options": null,
	I0114 10:33:15.504688  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:33:15.504697  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:33:15.504704  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:33:15.504715  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:33:15.504723  110500 command_runner.go:130] >       },
	I0114 10:33:15.504730  110500 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0114 10:33:15.504740  110500 command_runner.go:130] >         "runtimeType": "",
	I0114 10:33:15.504751  110500 command_runner.go:130] >         "runtimePath": "",
	I0114 10:33:15.504757  110500 command_runner.go:130] >         "runtimeEngine": "",
	I0114 10:33:15.504766  110500 command_runner.go:130] >         "PodAnnotations": null,
	I0114 10:33:15.504772  110500 command_runner.go:130] >         "ContainerAnnotations": null,
	I0114 10:33:15.504781  110500 command_runner.go:130] >         "runtimeRoot": "",
	I0114 10:33:15.504788  110500 command_runner.go:130] >         "options": null,
	I0114 10:33:15.504800  110500 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0114 10:33:15.504810  110500 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0114 10:33:15.504818  110500 command_runner.go:130] >         "cniConfDir": "",
	I0114 10:33:15.504823  110500 command_runner.go:130] >         "cniMaxConfNum": 0
	I0114 10:33:15.504829  110500 command_runner.go:130] >       },
	I0114 10:33:15.504833  110500 command_runner.go:130] >       "runtimes": {
	I0114 10:33:15.504839  110500 command_runner.go:130] >         "default": {
	I0114 10:33:15.504844  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504850  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:33:15.504854  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:33:15.504859  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:33:15.504865  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:33:15.504869  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:33:15.504876  110500 command_runner.go:130] >           "options": null,
	I0114 10:33:15.504884  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:33:15.504891  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:33:15.504896  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:33:15.504903  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:33:15.504907  110500 command_runner.go:130] >         },
	I0114 10:33:15.504914  110500 command_runner.go:130] >         "runc": {
	I0114 10:33:15.504919  110500 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0114 10:33:15.504926  110500 command_runner.go:130] >           "runtimePath": "",
	I0114 10:33:15.504930  110500 command_runner.go:130] >           "runtimeEngine": "",
	I0114 10:33:15.504937  110500 command_runner.go:130] >           "PodAnnotations": null,
	I0114 10:33:15.504942  110500 command_runner.go:130] >           "ContainerAnnotations": null,
	I0114 10:33:15.504949  110500 command_runner.go:130] >           "runtimeRoot": "",
	I0114 10:33:15.504953  110500 command_runner.go:130] >           "options": {
	I0114 10:33:15.504968  110500 command_runner.go:130] >             "SystemdCgroup": false
	I0114 10:33:15.504974  110500 command_runner.go:130] >           },
	I0114 10:33:15.504980  110500 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0114 10:33:15.504985  110500 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0114 10:33:15.504991  110500 command_runner.go:130] >           "cniConfDir": "",
	I0114 10:33:15.504997  110500 command_runner.go:130] >           "cniMaxConfNum": 0
	I0114 10:33:15.505001  110500 command_runner.go:130] >         }
	I0114 10:33:15.505005  110500 command_runner.go:130] >       },
	I0114 10:33:15.505011  110500 command_runner.go:130] >       "noPivot": false,
	I0114 10:33:15.505016  110500 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0114 10:33:15.505024  110500 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0114 10:33:15.505029  110500 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0114 10:33:15.505035  110500 command_runner.go:130] >     },
	I0114 10:33:15.505039  110500 command_runner.go:130] >     "cni": {
	I0114 10:33:15.505047  110500 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0114 10:33:15.505053  110500 command_runner.go:130] >       "confDir": "/etc/cni/net.mk",
	I0114 10:33:15.505057  110500 command_runner.go:130] >       "maxConfNum": 1,
	I0114 10:33:15.505065  110500 command_runner.go:130] >       "confTemplate": "",
	I0114 10:33:15.505070  110500 command_runner.go:130] >       "ipPref": ""
	I0114 10:33:15.505077  110500 command_runner.go:130] >     },
	I0114 10:33:15.505081  110500 command_runner.go:130] >     "registry": {
	I0114 10:33:15.505088  110500 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0114 10:33:15.505092  110500 command_runner.go:130] >       "mirrors": null,
	I0114 10:33:15.505099  110500 command_runner.go:130] >       "configs": null,
	I0114 10:33:15.505103  110500 command_runner.go:130] >       "auths": null,
	I0114 10:33:15.505109  110500 command_runner.go:130] >       "headers": null
	I0114 10:33:15.505114  110500 command_runner.go:130] >     },
	I0114 10:33:15.505120  110500 command_runner.go:130] >     "imageDecryption": {
	I0114 10:33:15.505124  110500 command_runner.go:130] >       "keyModel": "node"
	I0114 10:33:15.505130  110500 command_runner.go:130] >     },
	I0114 10:33:15.505134  110500 command_runner.go:130] >     "disableTCPService": true,
	I0114 10:33:15.505141  110500 command_runner.go:130] >     "streamServerAddress": "",
	I0114 10:33:15.505145  110500 command_runner.go:130] >     "streamServerPort": "10010",
	I0114 10:33:15.505150  110500 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0114 10:33:15.505154  110500 command_runner.go:130] >     "enableSelinux": false,
	I0114 10:33:15.505159  110500 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0114 10:33:15.505165  110500 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.8",
	I0114 10:33:15.505169  110500 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0114 10:33:15.505176  110500 command_runner.go:130] >     "systemdCgroup": false,
	I0114 10:33:15.505180  110500 command_runner.go:130] >     "enableTLSStreaming": false,
	I0114 10:33:15.505187  110500 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0114 10:33:15.505192  110500 command_runner.go:130] >       "tlsCertFile": "",
	I0114 10:33:15.505198  110500 command_runner.go:130] >       "tlsKeyFile": ""
	I0114 10:33:15.505202  110500 command_runner.go:130] >     },
	I0114 10:33:15.505207  110500 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0114 10:33:15.505214  110500 command_runner.go:130] >     "disableCgroup": false,
	I0114 10:33:15.505218  110500 command_runner.go:130] >     "disableApparmor": false,
	I0114 10:33:15.505228  110500 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0114 10:33:15.505238  110500 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0114 10:33:15.505243  110500 command_runner.go:130] >     "disableProcMount": false,
	I0114 10:33:15.505247  110500 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0114 10:33:15.505252  110500 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0114 10:33:15.505257  110500 command_runner.go:130] >     "disableHugetlbController": true,
	I0114 10:33:15.505266  110500 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0114 10:33:15.505271  110500 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0114 10:33:15.505278  110500 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0114 10:33:15.505283  110500 command_runner.go:130] >     "enableUnprivilegedPorts": false,
	I0114 10:33:15.505292  110500 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0114 10:33:15.505298  110500 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0114 10:33:15.505306  110500 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0114 10:33:15.505312  110500 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0114 10:33:15.505320  110500 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0114 10:33:15.505324  110500 command_runner.go:130] >   },
	I0114 10:33:15.505330  110500 command_runner.go:130] >   "golang": "go1.18.8",
	I0114 10:33:15.505335  110500 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0114 10:33:15.505342  110500 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0114 10:33:15.505345  110500 command_runner.go:130] > }
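The crictl info dump above is plain JSON, so the fields the test cares about (runtime/network readiness, CNI load status) can be pulled out with encoding/json. A sketch using the key names from that output; everything else is illustrative:

// sketch_criinfo.go: extract status fields from `crictl info` JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status bool   `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
		LastCNILoadStatus string `json:"lastCNILoadStatus"`
	}
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s=%v\n", c.Type, c.Status) // RuntimeReady / NetworkReady in the dump
	}
	fmt.Println("lastCNILoadStatus:", info.LastCNILoadStatus)
}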
	I0114 10:33:15.505500  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:33:15.505509  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:33:15.505519  110500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:33:15.505532  110500 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102822 NodeName:multinode-102822-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:33:15.505648  110500 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-102822-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
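minikube renders the kubeadm YAML above from a Go template (the kubeadm.go:163 step). A toy template covering only the node-specific InitConfiguration fields, to show the mechanism; it is not the real template:

// sketch_kubeadmcfg.go: render a kubeadm config fragment from a template.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeIP, NodeName, CRISocket string
		Port                        int
	}{"192.168.58.3", "multinode-102822-m02", "/run/containerd/containerd.sock", 8443}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}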
	
	I0114 10:33:15.505707  110500 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-102822-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:33:15.505750  110500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:33:15.512450  110500 command_runner.go:130] > kubeadm
	I0114 10:33:15.512472  110500 command_runner.go:130] > kubectl
	I0114 10:33:15.512480  110500 command_runner.go:130] > kubelet
	I0114 10:33:15.513054  110500 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:33:15.513107  110500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0114 10:33:15.519785  110500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0114 10:33:15.534253  110500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:33:15.546554  110500 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:33:15.549454  110500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:33:15.558528  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:33:15.558803  110500 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:33:15.558758  110500 start.go:286] JoinCluster: &{Name:multinode-102822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102822 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:33:15.558854  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0114 10:33:15.558894  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:33:15.583347  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:33:15.717142  110500 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 
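The join command above is obtained by running kubeadm token create --print-join-command on the control plane and capturing stdout. A local sketch of that call; the real run goes through ssh_runner over the tunnel shown above, and the PATH prefix is minikube-specific:

// sketch_joincmd.go: generate a cluster join command and capture it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))
	fmt.Println("join command:", joinCmd)
}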
	I0114 10:33:15.717199  110500 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:15.717241  110500 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:33:15.717479  110500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0114 10:33:15.717511  110500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:33:15.742483  110500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:33:15.884475  110500 command_runner.go:130] > node/multinode-102822-m02 cordoned
	I0114 10:33:17.906182  110500 command_runner.go:130] > pod/busybox-65db55d5d6-jth2v deleted
	I0114 10:33:17.906203  110500 command_runner.go:130] > node/multinode-102822-m02 drained
	I0114 10:33:17.939112  110500 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0114 10:33:17.939144  110500 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bwgvn, kube-system/kube-proxy-4d5n6
	I0114 10:33:17.939174  110500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102822-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.221667006s)
	I0114 10:33:17.939190  110500 node.go:109] successfully drained node "m02"
	I0114 10:33:17.939610  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:33:17.939958  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:33:17.940363  110500 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0114 10:33:17.940424  110500 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:33:17.940434  110500 round_trippers.go:469] Request Headers:
	I0114 10:33:17.940446  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:33:17.940458  110500 round_trippers.go:473]     Content-Type: application/json
	I0114 10:33:17.940467  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:33:17.944032  110500 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 10:33:17.944056  110500 round_trippers.go:577] Response Headers:
	I0114 10:33:17.944066  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:33:17 GMT
	I0114 10:33:17.944074  110500 round_trippers.go:580]     Audit-Id: 1447952b-a9b6-4bad-af8b-4518cb0f651f
	I0114 10:33:17.944082  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:33:17.944092  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:33:17.944099  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:33:17.944106  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:33:17.944118  110500 round_trippers.go:580]     Content-Length: 171
	I0114 10:33:17.944147  110500 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-102822-m02","kind":"nodes","uid":"70a8a780-7247-4989-8ed3-55fedd3a32c4"}}
	I0114 10:33:17.944184  110500 node.go:125] successfully deleted node "m02"
	I0114 10:33:17.944197  110500 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
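The raw DELETE /api/v1/nodes/... exchange above is what a typed client-go call produces on the wire. The same deletion via client-go, with the kubeconfig path taken from the log (a sketch, not minikube's node.go):

// sketch_delnode.go: delete a Node object through client-go.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/15642-3818/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to the logged DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	if err := cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-102822-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}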
	I0114 10:33:17.944219  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:17.944239  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:17.981302  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:18.010540  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:18.010570  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:18.010578  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:18.010587  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:18.010596  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:18.010604  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:18.010620  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:18.010632  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:18.010643  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:18.010656  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:18.010668  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:18.010680  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:18.089584  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:18.089615  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 10:33:18.108662  110500 command_runner.go:130] ! W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:18.108687  110500 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 10:33:18.108704  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:18.108710  110500 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 10:33:18.108718  110500 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 10:33:18.108729  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:18.108739  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 10:33:18.108785  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
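
The join fails because the API server still holds a Node object named "multinode-102822-m02" from before the restart, and kubeadm refuses to re-register a name that already exists with status "Ready". A minimal manual check of the stale object (illustrative only; assumes kubectl is pointed at this cluster):

    kubectl get node multinode-102822-m02 -o wide    # the stale registration kubeadm is complaining about
    kubectl delete node multinode-102822-m02         # the deletion kubeadm's error message asks for

minikube takes the other route the message offers: it resets the worker with "kubeadm reset --force" and retries, as the lines that follow show.
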
	I0114 10:33:18.108798  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:18.108809  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:18.140909  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:18.141227  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:18.141249  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:18.141257  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:18.233452  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:18.688329  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:18.688363  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:18.688374  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:18.689316  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:18.689334  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:18.689341  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:18.689347  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:18.689354  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:18.689370  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:18.689382  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:18.691216  110500 command_runner.go:130] ! W0114 10:33:18.140748     931 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:18.691240  110500 start.go:316] successfully reset worker node "m02"
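
As its output above spells out, "kubeadm reset" deliberately leaves CNI configuration, iptables/IPVS state, and kubeconfig files behind. The manual cleanup it describes, for reference (destructive; run on the node only if that state actually needs to go):

    sudo rm -rf /etc/cni/net.d    # CNI configuration, per the reset output
    sudo iptables -F              # flush iptables rules, as the output suggests
    sudo ipvsadm --clear          # only if the cluster was set up with IPVS
    rm -f "$HOME/.kube/config"    # stale kubeconfig, if no longer wanted

minikube itself runs only the reset before rejoining; none of this cleanup appears in the log.
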
	I0114 10:33:18.691258  110500 retry.go:31] will retry after 11.645600532s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:17.980774     830 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
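
The waits between attempts (11.6s here, then 14.1s and 20.8s below) come from minikube's retry helper, which stretches the delay on each failure. A rough bash equivalent of the reset-and-rejoin loop (sketch only; <token> and <hash> stand in for the values logged above, and the real backoff schedule is internal to minikube):

    JOIN="sudo kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash <hash> --ignore-preflight-errors=all"
    delay=11
    until $JOIN; do                 # retry the join until it succeeds
      sudo kubeadm reset --force    # clear partial state, as start.go:312 does
      sleep "$delay"
      delay=$((delay + 3))          # grow the wait between attempts
    done
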
	I0114 10:33:30.339744  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:30.339825  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:30.371164  110500 command_runner.go:130] ! W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:30.391839  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:30.474705  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:30.474732  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.476832  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:30.476856  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:30.476865  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:30.476871  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:30.476879  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:30.476888  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:30.476897  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:30.476907  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:30.476923  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:30.476933  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:30.476947  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:30.476959  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:30.476971  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:30.476983  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:30.476997  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 10:33:30.477078  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:30.477095  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:30.477113  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:30.507252  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:30.507323  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:30.507351  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:30.507364  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:30.510729  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:30.527843  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:30.527883  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:30.527895  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:30.527906  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:30.527919  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:30.527933  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:30.527944  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:30.527952  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:30.527963  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:30.527971  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:30.529841  110500 command_runner.go:130] ! W0114 10:33:30.506899    1326 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:30.529869  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:30.529889  110500 retry.go:31] will retry after 14.065712808s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:30.370724    1285 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.596274  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:33:44.596340  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:33:44.627227  110500 command_runner.go:130] ! W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:33:44.647728  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:33:44.732356  110500 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 10:33:44.732389  110500 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.734274  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:44.734294  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:33:44.734301  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:33:44.734305  110500 command_runner.go:130] > OS: Linux
	I0114 10:33:44.734310  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:33:44.734316  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:33:44.734323  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:33:44.734328  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:33:44.734333  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:33:44.734338  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:33:44.734347  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:33:44.734352  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:33:44.734357  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:33:44.734362  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:33:44.734372  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 10:33:44.734418  110500 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:33:44.734433  110500 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 10:33:44.734444  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 10:33:44.764574  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:33:44.764594  110500 command_runner.go:130] > [reset] No etcd config found. Assuming external etcd
	I0114 10:33:44.764606  110500 command_runner.go:130] > [reset] Please, manually reset etcd to prevent further issues
	I0114 10:33:44.764759  110500 command_runner.go:130] > [reset] Stopping the kubelet service
	I0114 10:33:44.768331  110500 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0114 10:33:44.785135  110500 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
	I0114 10:33:44.785167  110500 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0114 10:33:44.785178  110500 command_runner.go:130] > [reset] Deleting contents of stateful directories: [/var/lib/kubelet]
	I0114 10:33:44.785189  110500 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0114 10:33:44.785200  110500 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0114 10:33:44.785211  110500 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0114 10:33:44.785217  110500 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0114 10:33:44.785224  110500 command_runner.go:130] > to reset your system's IPVS tables.
	I0114 10:33:44.785239  110500 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0114 10:33:44.785247  110500 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0114 10:33:44.786933  110500 command_runner.go:130] ! W0114 10:33:44.764260    1389 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0114 10:33:44.786961  110500 start.go:316] successfully reset worker node "m02"
	I0114 10:33:44.786980  110500 retry.go:31] will retry after 20.804343684s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:44.626772    1349 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-102822-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 10:34:05.591739  110500 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:34:05.591806  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02"
	I0114 10:34:05.623871  110500 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 10:34:05.645267  110500 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:34:05.645298  110500 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:34:05.645308  110500 command_runner.go:130] > OS: Linux
	I0114 10:34:05.645316  110500 command_runner.go:130] > CGROUPS_CPU: enabled
	I0114 10:34:05.645324  110500 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0114 10:34:05.645332  110500 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0114 10:34:05.645344  110500 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0114 10:34:05.645357  110500 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0114 10:34:05.645370  110500 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0114 10:34:05.645390  110500 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0114 10:34:05.645402  110500 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0114 10:34:05.645416  110500 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0114 10:34:05.714434  110500 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 10:34:05.714463  110500 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 10:34:05.738856  110500 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:34:05.738889  110500 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:34:05.738898  110500 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 10:34:05.816843  110500 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0114 10:34:11.333257  110500 command_runner.go:130] > This node has joined the cluster:
	I0114 10:34:11.333282  110500 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0114 10:34:11.333289  110500 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0114 10:34:11.333295  110500 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0114 10:34:11.335498  110500 command_runner.go:130] ! W0114 10:34:05.623434    1411 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:34:11.335523  110500 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:34:11.335543  110500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2hh8x.7qkg8prlbl3fkpts --discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-102822-m02": (5.743723595s)
	I0114 10:34:11.335568  110500 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0114 10:34:11.487045  110500 start.go:288] JoinCluster complete in 55.928280874s
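
The fourth attempt finally succeeds: by 10:34:05 the node-controller had marked the stale m02 object NotReady (see the NodeNotReady event in the describe output further down), so kubeadm's "already exists ... and status Ready" check apparently no longer fires. From the control plane the join can be confirmed the way kubeadm's own output suggests:

    kubectl get nodes -o wide    # multinode-102822-m02 should now be listed
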
	I0114 10:34:11.487079  110500 cni.go:95] Creating CNI manager for ""
	I0114 10:34:11.487087  110500 cni.go:156] 3 nodes found, recommending kindnet
	I0114 10:34:11.487145  110500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:34:11.490436  110500 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 10:34:11.490463  110500 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 10:34:11.490478  110500 command_runner.go:130] > Device: 34h/52d	Inode: 565966      Links: 1
	I0114 10:34:11.490489  110500 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 10:34:11.490502  110500 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:34:11.490512  110500 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 10:34:11.490523  110500 command_runner.go:130] > Change: 2023-01-14 10:06:59.488187836 +0000
	I0114 10:34:11.490531  110500 command_runner.go:130] >  Birth: -
	I0114 10:34:11.490577  110500 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:34:11.490589  110500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:34:11.503251  110500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:34:11.654512  110500 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:34:11.656123  110500 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 10:34:11.657892  110500 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 10:34:11.665488  110500 command_runner.go:130] > daemonset.apps/kindnet configured
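
With three nodes found, minikube re-applies its kindnet CNI manifest through the bundled kubectl; every object already exists, so the apply reports "unchanged"/"configured". The equivalent check by hand (standard kubectl, assuming kindnet runs in kube-system as the RBAC objects above imply):

    kubectl -n kube-system get daemonset kindnet    # expect one pod per node
    kubectl get clusterrole kindnet                 # one of the RBAC objects the apply touched
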
	I0114 10:34:11.669420  110500 start.go:212] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0114 10:34:11.671634  110500 out.go:177] * Verifying Kubernetes components...
	I0114 10:34:11.673414  110500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:34:11.682991  110500 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:34:11.683197  110500 kapi.go:59] client config for multinode-102822: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/multinode-102822/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:34:11.683407  110500 node_ready.go:35] waiting up to 6m0s for node "multinode-102822-m02" to be "Ready" ...
	I0114 10:34:11.683462  110500 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-102822-m02
	I0114 10:34:11.683469  110500 round_trippers.go:469] Request Headers:
	I0114 10:34:11.683476  110500 round_trippers.go:473]     Accept: application/json, */*
	I0114 10:34:11.683486  110500 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0114 10:34:11.685497  110500 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 10:34:11.685521  110500 round_trippers.go:577] Response Headers:
	I0114 10:34:11.685532  110500 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 10:34:11.685540  110500 round_trippers.go:580]     Content-Type: application/json
	I0114 10:34:11.685548  110500 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab714c18-4188-4b94-b521-f42fc661c2c7
	I0114 10:34:11.685558  110500 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ad1c98a-04e3-46fb-bd92-05aa51f02125
	I0114 10:34:11.685571  110500 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:34:11 GMT
	I0114 10:34:11.685584  110500 round_trippers.go:580]     Audit-Id: f79a2115-26cd-46bc-8cab-079a2a0ca5bf
	I0114 10:34:11.685686  110500 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102822-m02","uid":"f1608f12-9a61-41d8-b38b-2fa2b878a3bb","resourceVersion":"915","creationTimestamp":"2023-01-14T10:33:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102822-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:33:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update"," [truncated 4761 chars]
	I0114 10:34:11.686002  110500 node_ready.go:53] node "multinode-102822-m02" has status "Ready":"Unknown"
	I0114 10:34:11.686020  110500 node_ready.go:38] duration metric: took 2.599177ms waiting for node "multinode-102822-m02" to be "Ready" ...
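
The readiness wait is a plain GET on the Node object, as the round_trippers lines above show. The same request can be issued directly (assuming kubectl points at this cluster):

    kubectl get --raw /api/v1/nodes/multinode-102822-m02                # the raw GET logged above
    kubectl get node multinode-102822-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # prints Unknown at this moment
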
	I0114 10:34:11.687814  110500 out.go:177] 
	W0114 10:34:11.689189  110500 out.go:239] X Exiting due to GUEST_START: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: node "multinode-102822-m02" has status "Ready":"Unknown"
	W0114 10:34:11.689225  110500 out.go:239] * 
	W0114 10:34:11.690030  110500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:34:11.691589  110500 out.go:177] 
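
"Ready":"Unknown" means the node-controller has stopped hearing kubelet heartbeats for m02, which is expected moments after kubeadm reset stopped that kubelet; the condition flips back to True at 10:34:19, eight seconds after this run aborted at 10:34:11. The wait appears to give up on "Unknown" immediately instead of polling out its 6m budget, which is what turns a transient state into a test failure. The recovery is visible by watching the node:

    kubectl get node multinode-102822-m02 -w    # -w streams the NotReady -> Ready transition
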
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	ff74c8ad1795d       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   2d7e3cdf99356
	5dd8ac1a35968       beaaf00edd38a       About a minute ago   Running             kube-proxy                1                   872946e88e635
	778e081cfbf69       8c811b4aec35f       About a minute ago   Running             busybox                   1                   99cb0cb52bd50
	6e3feef784cf5       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   d78446cc3352d
	15ee53e6aba5d       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   2d7e3cdf99356
	a2bee209de6eb       5185b96f0becf       About a minute ago   Running             coredns                   1                   21b5c4e68a1af
	b38b962e79724       a8a176a5d5d69       2 minutes ago        Running             etcd                      1                   e3e44fec0d38d
	9f38caa8f201e       0346dbd74bcb9       2 minutes ago        Running             kube-apiserver            1                   a005864bc203c
	2d6d8d7cecf8f       6039992312758       2 minutes ago        Running             kube-controller-manager   1                   370d556f0b87e
	fcb2330166522       6d23ec0e8b87e       2 minutes ago        Running             kube-scheduler            1                   055c0d220f8c0
	76169c252da32       8c811b4aec35f       4 minutes ago        Exited              busybox                   0                   e7e7b3fb5d878
	dd932b8cd4a02       5185b96f0becf       5 minutes ago        Exited              coredns                   0                   a5e06867ee15a
	fa1fbfcc6ff2a       d6e3e26021b60       5 minutes ago        Exited              kindnet-cni               0                   92b2f3fda0a5f
	6391dbda6c818       beaaf00edd38a       5 minutes ago        Exited              kube-proxy                0                   aec55d29d9fbb
	1cb572f06fea4       6039992312758       5 minutes ago        Exited              kube-controller-manager   0                   1543db59f0d6a
	9a1ebe17670ca       0346dbd74bcb9       5 minutes ago        Exited              kube-apiserver            0                   f17729d9ba6b0
	7263297512701       6d23ec0e8b87e       5 minutes ago        Exited              kube-scheduler            0                   6c251bcc8b6a8
	1485b440fe92c       a8a176a5d5d69       5 minutes ago        Exited              etcd                      0                   044bafd02d44a
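
The container status table is minikube's snapshot of containerd state: the Attempt 0 containers from before the stop show Exited, while their Attempt 1 replacements (and Attempt 2 for storage-provisioner) are Running. The same listing comes from crictl on the node (hypothetical session; the docker driver exposes the node via minikube ssh):

    minikube -p multinode-102822 ssh -- sudo crictl ps -a    # running and exited containers, with attempt counts
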
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2023-01-14 10:31:57 UTC, end at Sat 2023-01-14 10:34:20 UTC. --
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.041737219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.041752786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.042146572Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0 pid=1649 runtime=io.containerd.runc.v2
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.151127866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlcll,Uid:91e05737-5cbf-404c-8b7c-75045f584885,Namespace:kube-system,Attempt:1,} returns sandbox id \"872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0\""
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.155190898Z" level=info msg="CreateContainer within sandbox \"872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.241053055Z" level=info msg="CreateContainer within sandbox \"872946e88e635e99055da3c3b2a91abc7bee98872cf906742630898d4ada7ea0\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4\""
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.242016429Z" level=info msg="StartContainer for \"5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4\""
	Jan 14 10:32:23 multinode-102822 containerd[386]: time="2023-01-14T10:32:23.384609845Z" level=info msg="StartContainer for \"5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4\" returns successfully"
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.630496668Z" level=info msg="shim disconnected" id=15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.630560718Z" level=warning msg="cleaning up after shim disconnected" id=15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2 namespace=k8s.io
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.630572509Z" level=info msg="cleaning up dead shim"
	Jan 14 10:32:52 multinode-102822 containerd[386]: time="2023-01-14T10:32:52.638898216Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:32:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1966 runtime=io.containerd.runc.v2\n"
	Jan 14 10:32:53 multinode-102822 containerd[386]: time="2023-01-14T10:32:53.459305637Z" level=info msg="RemoveContainer for \"8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f\""
	Jan 14 10:32:53 multinode-102822 containerd[386]: time="2023-01-14T10:32:53.464363070Z" level=info msg="RemoveContainer for \"8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f\" returns successfully"
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.278195346Z" level=info msg="CreateContainer within sandbox \"2d7e3cdf993560349ea2320d065ed8ccc5feaa8a7e03948691e6a54662fc2a78\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.304562736Z" level=info msg="CreateContainer within sandbox \"2d7e3cdf993560349ea2320d065ed8ccc5feaa8a7e03948691e6a54662fc2a78\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406\""
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.305168429Z" level=info msg="StartContainer for \"ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406\""
	Jan 14 10:33:04 multinode-102822 containerd[386]: time="2023-01-14T10:33:04.377037261Z" level=info msg="StartContainer for \"ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406\" returns successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172204300Z" level=info msg="StopPodSandbox for \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\""
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172310091Z" level=info msg="TearDown network for sandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172342365Z" level=info msg="StopPodSandbox for \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" returns successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172766315Z" level=info msg="RemovePodSandbox for \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\""
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172806908Z" level=info msg="Forcibly stopping sandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\""
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.172883998Z" level=info msg="TearDown network for sandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" successfully"
	Jan 14 10:33:17 multinode-102822 containerd[386]: time="2023-01-14T10:33:17.177614484Z" level=info msg="RemovePodSandbox \"ae6051669b431987410648582674ac662c848233477c6028f3deaf2d450faf88\" returns successfully"
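
The containerd section is journald output collected by the log bundle ("Logs begin at ..."); the shim-disconnected/cleanup pair around 10:32:52 is storage-provisioner's Attempt 1 (15ee53e6aba5d) dying before its Attempt 2 restart at 10:33:04. To follow the same unit live (assumes the node container is up):

    minikube -p multinode-102822 ssh -- sudo journalctl -u containerd -f
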
	
	* 
	* ==> coredns [a2bee209de6eb085860890c15e08d7f808c0f0609cb757aa0869e3c82e6984f4] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 74073c0c68a507b50ca81d319bd4852e1242323807443dc549ab9f2fb21c8587977d5d9a7ecbfada54b5ff45c9b40d98fc730bfb6641b1b669d8fa8e6e9cea7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
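
The restarted coredns instance [a2bee209...] spends its first seconds blocked on the readiness plugin ("Still waiting on: kubernetes") until its API connection comes up, then loads the same configuration SHA512 as the pre-restart instance below. The same view by hand, using the standard coredns label:

    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
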
	
	* 
	* ==> coredns [dd932b8cd4a0218597006a6f69a955260d91eac78339b0c19228753c56330653] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 74073c0c68a507b50ca81d319bd4852e1242323807443dc549ab9f2fb21c8587977d5d9a7ecbfada54b5ff45c9b40d98fc730bfb6641b1b669d8fa8e6e9cea7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-102822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=multinode-102822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_28_43_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-102822
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:34:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:32:21 +0000   Sat, 14 Jan 2023 10:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-102822
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                d8105840-a25f-4140-bd3c-c1b0fe6228a7
	  Boot ID:                    3b63a5ae-0a73-415b-af74-fb930cc7c08b
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-2hdwz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-565d847f94-f5dzh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m25s
	  kube-system                 etcd-multinode-102822                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m38s
	  kube-system                 kindnet-zm4vf                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m26s
	  kube-system                 kube-apiserver-multinode-102822             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-multinode-102822    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-qlcll                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-multinode-102822             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m24s                  kube-proxy       
	  Normal  Starting                 116s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m45s (x4 over 5m45s)  kubelet          Node multinode-102822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m45s (x4 over 5m45s)  kubelet          Node multinode-102822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x4 over 5m45s)  kubelet          Node multinode-102822 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     5m38s                  kubelet          Node multinode-102822 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m38s                  kubelet          Node multinode-102822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s                  kubelet          Node multinode-102822 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m38s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m26s                  node-controller  Node multinode-102822 event: Registered Node multinode-102822 in Controller
	  Normal  NodeReady                5m18s                  kubelet          Node multinode-102822 status is now: NodeReady
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)    kubelet          Node multinode-102822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)    kubelet          Node multinode-102822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)    kubelet          Node multinode-102822 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                   node-controller  Node multinode-102822 event: Registered Node multinode-102822 in Controller
	
	
	Name:               multinode-102822-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102822-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:33:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-102822-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:34:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:34:19 +0000   Sat, 14 Jan 2023 10:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:34:19 +0000   Sat, 14 Jan 2023 10:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:34:19 +0000   Sat, 14 Jan 2023 10:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:34:19 +0000   Sat, 14 Jan 2023 10:34:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-102822-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                82c44291-a31f-43c5-972f-5aaaac290f21
	  Boot ID:                    3b63a5ae-0a73-415b-af74-fb930cc7c08b
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-tch5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-bwgvn               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-proxy-4d5n6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 4m44s                 kube-proxy       
	  Normal  Starting                 62s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m53s (x8 over 5m5s)  kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x8 over 5m5s)  kubelet          Node multinode-102822-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 83s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  83s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    82s (x4 over 83s)     kubelet          Node multinode-102822-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x4 over 83s)     kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  81s (x5 over 83s)     kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             22s                   node-controller  Node multinode-102822-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  1s (x8 over 14s)      kubelet          Node multinode-102822-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    1s (x8 over 14s)      kubelet          Node multinode-102822-m02 status is now: NodeHasNoDiskPressure
	
	* 
	* ==> dmesg <==
	* [  +4.654706] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=0000000092f5ea2a{9p.inode} n=00000000eb6e0172
	[  +0.007355] FS-Cache: O-key=[8] '87a00f0200000000'
	[  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006684] FS-Cache: N-cookie d=0000000092f5ea2a{9p.inode} n=00000000e1d5d334
	[  +0.008751] FS-Cache: N-key=[8] '87a00f0200000000'
	[  +0.343217] FS-Cache: Duplicate cookie detected
	[  +0.004671] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=0000000092f5ea2a{9p.inode} n=00000000d68b2e5d
	[  +0.007346] FS-Cache: O-key=[8] '91a00f0200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008041] FS-Cache: N-cookie d=0000000092f5ea2a{9p.inode} n=00000000e2277570
	[  +0.008761] FS-Cache: N-key=[8] '91a00f0200000000'
	[Jan14 10:21] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan14 10:32] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000006] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +1.007675] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000005] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +2.011857] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000006] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +4.031727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000030] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +8.195348] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000007] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	
	* 
	* ==> etcd [1485b440fe92caa2d6576d2a8fc48c927ba4b2a5dfe68a214d1b37227e5c1da2] <==
	* {"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:28:36.555Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:37.246Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-102822 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:28:37.247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:28:37.248Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:28:37.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"warn","ts":"2023-01-14T10:29:11.089Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.668881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2023-01-14T10:29:11.089Z","caller":"traceutil/trace.go:171","msg":"trace[1315289163] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:416; }","duration":"183.911052ms","start":"2023-01-14T10:29:10.905Z","end":"2023-01-14T10:29:11.089Z","steps":["trace[1315289163] 'agreement among raft nodes before linearized reading'  (duration: 58.269131ms)","trace[1315289163] 'range keys from in-memory index tree'  (duration: 125.354624ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-14T10:30:00.966Z","caller":"traceutil/trace.go:171","msg":"trace[654694052] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"115.519919ms","start":"2023-01-14T10:30:00.851Z","end":"2023-01-14T10:30:00.966Z","steps":["trace[654694052] 'process raft request'  (duration: 54.574474ms)","trace[654694052] 'compare'  (duration: 60.838552ms)"],"step_count":2}
	
	* 
	* ==> etcd [b38b962e79724d6f1c63d4dc3d78a13f8cb401220c7a537a8db80bdf0b792460] <==
	* {"level":"info","ts":"2023-01-14T10:32:18.224Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T10:32:18.224Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:32:18.225Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:32:18.227Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:32:18.228Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-102822 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:32:19.154Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:32:19.155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:32:19.155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:32:19.156Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:32:19.156Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  10:34:20 up  1:16,  0 users,  load average: 0.68, 0.79, 0.75
	Linux multinode-102822 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22] <==
	* I0114 10:28:39.505632       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:28:39.520109       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:28:39.520322       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:28:39.520469       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:28:39.520616       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:28:39.520692       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:28:39.531912       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:28:40.207596       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:28:40.409513       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0114 10:28:40.412238       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0114 10:28:40.412261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:28:40.717534       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:28:40.744085       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:28:40.850271       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0114 10:28:40.857124       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0114 10:28:40.857936       1 controller.go:616] quota admission added evaluator for: endpoints
	I0114 10:28:40.861414       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 10:28:41.450119       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:28:42.049560       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:28:42.055584       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0114 10:28:42.062475       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:28:42.134960       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:28:54.905735       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0114 10:28:55.106481       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [9f38caa8f201e108a7701a4a1d039d002613a957829d78fb98ff79df81ed0c18] <==
	* I0114 10:32:21.015075       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:32:21.021493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:32:21.030461       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0114 10:32:21.030484       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0114 10:32:21.030513       1 apf_controller.go:300] Starting API Priority and Fairness config controller
	I0114 10:32:21.030738       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:32:21.031056       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0114 10:32:21.042543       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0114 10:32:21.044284       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:32:21.120718       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:32:21.121958       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:32:21.122448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:32:21.122475       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:32:21.122486       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:32:21.130545       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:32:21.130620       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:32:21.805971       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:32:22.017251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:32:23.570545       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:32:23.668198       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:32:23.677305       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:32:23.726354       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:32:23.730823       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:32:33.329600       1 controller.go:616] quota admission added evaluator for: endpoints
	I0114 10:32:33.416103       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [1cb572f06fea415b6978f65908823610f0231c865b673086dc3a00a558a94028] <==
	* I0114 10:28:55.350784       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-k8hm7"
	I0114 10:29:04.205370       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0114 10:29:27.954465       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m02" does not exist
	I0114 10:29:27.958710       1 range_allocator.go:367] Set node multinode-102822-m02 PodCIDR to [10.244.1.0/24]
	I0114 10:29:27.962127       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4d5n6"
	I0114 10:29:27.968576       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bwgvn"
	W0114 10:29:29.208844       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m02. Assuming now as a timestamp.
	I0114 10:29:29.208925       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m02 event: Registered Node multinode-102822-m02 in Controller"
	W0114 10:29:48.543111       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	I0114 10:29:50.951973       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-65db55d5d6 to 2"
	I0114 10:29:50.957409       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-jth2v"
	I0114 10:29:50.961037       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-2hdwz"
	W0114 10:30:20.094854       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:30:20.094902       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m03" does not exist
	I0114 10:30:20.106934       1 range_allocator.go:367] Set node multinode-102822-m03 PodCIDR to [10.244.2.0/24]
	I0114 10:30:20.107450       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bzd24"
	I0114 10:30:20.107474       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fb2ng"
	W0114 10:30:24.222079       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m03. Assuming now as a timestamp.
	I0114 10:30:24.222142       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m03 event: Registered Node multinode-102822-m03 in Controller"
	W0114 10:30:26.502257       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:31:01.364313       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:31:02.150607       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	W0114 10:31:02.150838       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m03" does not exist
	I0114 10:31:02.154611       1 range_allocator.go:367] Set node multinode-102822-m03 PodCIDR to [10.244.3.0/24]
	W0114 10:31:12.292377       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	
	* 
	* ==> kube-controller-manager [2d6d8d7cecf8f4cf0dc5ea9dc307b93e538956656bc46cdb02f8f51ec9f02109] <==
	* W0114 10:32:33.502910       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-102822-m03. Assuming now as a timestamp.
	I0114 10:32:33.502934       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0114 10:32:33.502976       1 event.go:294] "Event occurred" object="multinode-102822" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822 event: Registered Node multinode-102822 in Controller"
	I0114 10:32:33.502997       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m02 event: Registered Node multinode-102822-m02 in Controller"
	I0114 10:32:33.503009       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-102822-m03 event: Registered Node multinode-102822-m03 in Controller"
	I0114 10:32:33.510031       1 shared_informer.go:262] Caches are synced for daemon sets
	I0114 10:32:33.525304       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:32:33.843127       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:32:33.873339       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:32:33.873365       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:33:13.514508       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102822-m03 status is now: NodeNotReady"
	W0114 10:33:13.514511       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	I0114 10:33:13.520444       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-bzd24" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.526157       1 event.go:294] "Event occurred" object="kube-system/kindnet-fb2ng" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.530639       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102822-m02 status is now: NodeNotReady"
	I0114 10:33:13.535978       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-4d5n6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.540620       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-jth2v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:13.546008       1 event.go:294] "Event occurred" object="kube-system/kindnet-bwgvn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:15.906483       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-tch5p"
	W0114 10:33:17.990591       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102822-m02" does not exist
	W0114 10:33:17.990638       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	I0114 10:33:17.996925       1 range_allocator.go:367] Set node multinode-102822-m02 PodCIDR to [10.244.1.0/24]
	I0114 10:33:58.556555       1 event.go:294] "Event occurred" object="multinode-102822-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102822-m02 status is now: NodeNotReady"
	I0114 10:34:18.559363       1 event.go:294] "Event occurred" object="multinode-102822-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-102822-m03 event: Removing Node multinode-102822-m03 from Controller"
	W0114 10:34:19.038055       1 topologycache.go:199] Can't get CPU or zone information for multinode-102822-m02 node
	
	* 
	* ==> kube-proxy [5dd8ac1a359683c5c399c317dda1711ec1312f6e08d11bf915214f05f07411e4] <==
	* I0114 10:32:23.450353       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0114 10:32:23.450451       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0114 10:32:23.450519       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:32:23.470509       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:32:23.470546       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:32:23.470555       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:32:23.470567       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:32:23.470588       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:32:23.470746       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:32:23.470991       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:32:23.471008       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:32:23.471559       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:32:23.471589       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:32:23.471598       1 config.go:444] "Starting node config controller"
	I0114 10:32:23.471608       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:32:23.471637       1 config.go:317] "Starting service config controller"
	I0114 10:32:23.471648       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:32:23.571802       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:32:23.571834       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:32:23.571834       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [6391dbda6c818a8bb3cbed072474e9617857621f10d73d3f52cb1b1f92bfbad2] <==
	* I0114 10:28:55.524702       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0114 10:28:55.524799       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0114 10:28:55.524828       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:28:55.548489       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:28:55.548526       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:28:55.548536       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:28:55.548548       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:28:55.548568       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:28:55.548740       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:28:55.548989       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:28:55.549002       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:28:55.549468       1 config.go:317] "Starting service config controller"
	I0114 10:28:55.549493       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:28:55.549528       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:28:55.549539       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:28:55.549530       1 config.go:444] "Starting node config controller"
	I0114 10:28:55.549556       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:28:55.649631       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:28:55.649672       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:28:55.649701       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [72632975127012fabc025a87c9f6dbf8c26439c85dff24c5d52930dd875149bf] <==
	* W0114 10:28:39.527575       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0114 10:28:39.527621       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0114 10:28:39.527575       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:39.527663       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:39.527666       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 10:28:39.527716       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 10:28:39.527773       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:39.527803       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:39.527825       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:39.527808       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:39.527939       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:39.527980       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:40.392746       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:40.392785       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 10:28:40.488188       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0114 10:28:40.488231       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0114 10:28:40.492144       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0114 10:28:40.492170       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:40.578843       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:40.578871       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:40.585826       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0114 10:28:40.585854       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:28:40.609956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:40.609985       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0114 10:28:43.525198       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fcb2330166522f9111ac906a8126009b1eb9533cf1305abd799e3399b7e38d65] <==
	* I0114 10:32:18.828989       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:32:21.039797       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:32:21.039833       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0114 10:32:21.039855       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:32:21.039865       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:32:21.127447       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:32:21.127476       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:32:21.128802       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:32:21.128832       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:32:21.128904       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:32:21.128937       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:32:21.229818       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:31:57 UTC, end at Sat 2023-01-14 10:34:20 UTC. --
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242328     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e664ab33-93db-45e5-a147-81c14dd05837-cni-cfg\") pod \"kindnet-zm4vf\" (UID: \"e664ab33-93db-45e5-a147-81c14dd05837\") " pod="kube-system/kindnet-zm4vf"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242347     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e664ab33-93db-45e5-a147-81c14dd05837-lib-modules\") pod \"kindnet-zm4vf\" (UID: \"e664ab33-93db-45e5-a147-81c14dd05837\") " pod="kube-system/kindnet-zm4vf"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242430     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91e05737-5cbf-404c-8b7c-75045f584885-kube-proxy\") pod \"kube-proxy-qlcll\" (UID: \"91e05737-5cbf-404c-8b7c-75045f584885\") " pod="kube-system/kube-proxy-qlcll"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242475     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb-config-volume\") pod \"coredns-565d847f94-f5dzh\" (UID: \"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb\") " pod="kube-system/coredns-565d847f94-f5dzh"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242529     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ae50847f-5144-4e4b-a340-5cbd0bbb55a2-tmp\") pod \"storage-provisioner\" (UID: \"ae50847f-5144-4e4b-a340-5cbd0bbb55a2\") " pod="kube-system/storage-provisioner"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242561     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e05737-5cbf-404c-8b7c-75045f584885-lib-modules\") pod \"kube-proxy-qlcll\" (UID: \"91e05737-5cbf-404c-8b7c-75045f584885\") " pod="kube-system/kube-proxy-qlcll"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242601     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6v9k\" (UniqueName: \"kubernetes.io/projected/7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb-kube-api-access-v6v9k\") pod \"coredns-565d847f94-f5dzh\" (UID: \"7e222fa8-4297-4bf5-b6b0-fbd1a96e85eb\") " pod="kube-system/coredns-565d847f94-f5dzh"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242625     767 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e664ab33-93db-45e5-a147-81c14dd05837-xtables-lock\") pod \"kindnet-zm4vf\" (UID: \"e664ab33-93db-45e5-a147-81c14dd05837\") " pod="kube-system/kindnet-zm4vf"
	Jan 14 10:32:21 multinode-102822 kubelet[767]: I0114 10:32:21.242671     767 reconciler.go:169] "Reconciler: start to sync state"
	Jan 14 10:32:22 multinode-102822 kubelet[767]: I0114 10:32:22.364716     767 request.go:682] Waited for 1.020652354s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Jan 14 10:32:23 multinode-102822 kubelet[767]: I0114 10:32:23.360444     767 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 14 10:32:27 multinode-102822 kubelet[767]: E0114 10:32:27.342361     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:27 multinode-102822 kubelet[767]: E0114 10:32:27.342409     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:32:37 multinode-102822 kubelet[767]: E0114 10:32:37.360656     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:37 multinode-102822 kubelet[767]: E0114 10:32:37.360716     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:32:47 multinode-102822 kubelet[767]: E0114 10:32:47.379514     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:47 multinode-102822 kubelet[767]: E0114 10:32:47.379579     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:32:53 multinode-102822 kubelet[767]: I0114 10:32:53.457979     767 scope.go:115] "RemoveContainer" containerID="8d47adfb5779e4983b862bf753ae6e9d348d57436960c00b03b9d43642535c7f"
	Jan 14 10:32:53 multinode-102822 kubelet[767]: I0114 10:32:53.458307     767 scope.go:115] "RemoveContainer" containerID="15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2"
	Jan 14 10:32:53 multinode-102822 kubelet[767]: E0114 10:32:53.458566     767 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae50847f-5144-4e4b-a340-5cbd0bbb55a2)\"" pod="kube-system/storage-provisioner" podUID=ae50847f-5144-4e4b-a340-5cbd0bbb55a2
	Jan 14 10:32:57 multinode-102822 kubelet[767]: E0114 10:32:57.394440     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:32:57 multinode-102822 kubelet[767]: E0114 10:32:57.394487     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jan 14 10:33:04 multinode-102822 kubelet[767]: I0114 10:33:04.275899     767 scope.go:115] "RemoveContainer" containerID="15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2"
	Jan 14 10:33:07 multinode-102822 kubelet[767]: E0114 10:33:07.412188     767 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jan 14 10:33:07 multinode-102822 kubelet[767]: E0114 10:33:07.412232     767 helpers.go:672] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	
	* 
	* ==> storage-provisioner [15ee53e6aba5d53c3c76c430f4cfe9da412f77a304261777bac3bea4b1445cd2] <==
	* I0114 10:32:22.605792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0114 10:32:52.608038       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [ff74c8ad1795d903215a7c1924fa2e413428b24a1ed3c89df3f21e46881dd406] <==
	* I0114 10:33:04.384355       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:33:04.390694       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:33:04.390738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:33:21.786224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:33:21.786294       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00795c43-9e4e-4929-bf49-b21bb407b065", APIVersion:"v1", ResourceVersion:"872", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-102822_20747e10-56a4-4e23-b77f-fb34ebcbf6d9 became leader
	I0114 10:33:21.786344       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-102822_20747e10-56a4-4e23-b77f-fb34ebcbf6d9!
	I0114 10:33:21.886858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-102822_20747e10-56a4-4e23-b77f-fb34ebcbf6d9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-102822 -n multinode-102822
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-102822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-tch5p
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/DeleteNode]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-102822 describe pod busybox-65db55d5d6-tch5p
helpers_test.go:280: (dbg) kubectl --context multinode-102822 describe pod busybox-65db55d5d6-tch5p:

-- stdout --
	Name:             busybox-65db55d5d6-tch5p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-102822-m02/
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jkptf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-jkptf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  65s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  64s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Normal   Scheduled         62s   default-scheduler  Successfully assigned default/busybox-65db55d5d6-tch5p to multinode-102822-m02
-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (6.68s)
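
The describe output above explains why the post-mortem found a Pending busybox pod: the replicas repel each other with required pod anti-affinity, so while one node is unschedulable and another carries the unreachable taint, no node without a matching replica remains, and the pod only binds once a node recovers (the final Scheduled event). A minimal client-go sketch of that kind of constraint follows; the selector and topology key are assumptions, since the test's actual manifest is not reproduced in this log.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAntiAffinity builds a required pod anti-affinity term that keeps
// pods labeled app=busybox on distinct nodes: a replica may only land on
// a node that hosts no other matching replica, which is exactly the
// condition the FailedScheduling events above report as unmet.
func busyboxAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}
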
TestPreload (371.58s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-103656 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-103656 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (52.058115415s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-103656 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-103656 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.338232887s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-103656 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6
E0114 10:37:53.110110   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:38:33.933903   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:40:27.808380   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:41:50.854117   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:42:53.110615   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-103656 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m13.664842744s)
-- stdout --
	* [test-preload-103656] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node test-preload-103656 in cluster test-preload-103656
	* Pulling base image ...
	* Downloading Kubernetes v1.24.6 preload ...
	* Updating the running docker "test-preload-103656" container ...
	* Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
	* Configuring CNI (Container Networking Interface) ...
	X Problems detected in kubelet:
	  Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050446    4197 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	  Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050472    4197 projected.go:192] Error preparing data for projected volume kube-api-access-8zqx7 for pod kube-system/kindnet-dvmsq: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	  Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050496    4197 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	
	
-- /stdout --
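
The kubelet errors quoted in the stdout block above are node-authorizer denials: with the Node authorizer and NodeRestriction admission plugin, a kubelet may read ConfigMaps and mint service-account tokens only for objects reachable from pods bound to its own Node, and during this in-place version switch that node-to-pod graph had not yet been re-established, hence "no relationship found between node ... and this object". For reference, this is the token call being denied, sketched with client-go (illustrative wiring; the expiry value is an assumption):

package example

import (
	"context"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requestKindnetToken performs the serviceaccounts/token create that the
// kubelet attempts for the kindnet pod; the node authorizer rejects it
// unless the requesting node has a bound pod using that service account.
func requestKindnetToken(cs kubernetes.Interface) error {
	expiry := int64(3600) // assumed; kubelets request pod-scoped expirations
	_, err := cs.CoreV1().ServiceAccounts("kube-system").CreateToken(
		context.TODO(),
		"kindnet",
		&authv1.TokenRequest{Spec: authv1.TokenRequestSpec{ExpirationSeconds: &expiry}},
		metav1.CreateOptions{},
	)
	return err
}
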
** stderr ** 
	I0114 10:37:51.002864  130375 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:37:51.002987  130375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:37:51.002999  130375 out.go:309] Setting ErrFile to fd 2...
	I0114 10:37:51.003007  130375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:37:51.003477  130375 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:37:51.004152  130375 out.go:303] Setting JSON to false
	I0114 10:37:51.005372  130375 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4818,"bootTime":1673687853,"procs":502,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:37:51.005442  130375 start.go:135] virtualization: kvm guest
	I0114 10:37:51.007911  130375 out.go:177] * [test-preload-103656] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:37:51.009298  130375 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:37:51.009248  130375 notify.go:220] Checking for updates...
	I0114 10:37:51.011741  130375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:37:51.012942  130375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:37:51.014056  130375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:37:51.015244  130375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:37:51.016719  130375 config.go:180] Loaded profile config "test-preload-103656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0114 10:37:51.018319  130375 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0114 10:37:51.019428  130375 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:37:51.047829  130375 docker.go:138] docker version: linux-20.10.22
	I0114 10:37:51.047971  130375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:37:51.142168  130375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-14 10:37:51.068287951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:37:51.142272  130375 docker.go:255] overlay module found
	I0114 10:37:51.145161  130375 out.go:177] * Using the docker driver based on existing profile
	I0114 10:37:51.146398  130375 start.go:294] selected driver: docker
	I0114 10:37:51.146412  130375 start.go:838] validating driver "docker" against &{Name:test-preload-103656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:37:51.146499  130375 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:37:51.147317  130375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:37:51.239425  130375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-14 10:37:51.166649377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:37:51.239717  130375 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:37:51.239746  130375 cni.go:95] Creating CNI manager for ""
	I0114 10:37:51.239754  130375 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:37:51.239769  130375 start_flags.go:319] config:
	{Name:test-preload-103656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:37:51.242659  130375 out.go:177] * Starting control plane node test-preload-103656 in cluster test-preload-103656
	I0114 10:37:51.243956  130375 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:37:51.245211  130375 out.go:177] * Pulling base image ...
	I0114 10:37:51.246721  130375 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:37:51.246761  130375 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:37:51.270626  130375 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:37:51.270649  130375 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:37:51.506522  130375 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0114 10:37:51.506551  130375 cache.go:57] Caching tarball of preloaded images
	I0114 10:37:51.506853  130375 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:37:51.508905  130375 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I0114 10:37:51.510171  130375 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:37:51.608881  130375 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0114 10:38:05.447131  130375 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:38:05.447223  130375 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:38:06.329607  130375 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
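
The download above fetches the preload tarball with an md5 checksum carried in the URL query and verifies it before the tarball is used. A compact sketch of that download-then-verify pattern follows, using plain net/http; this is not minikube's download package, which additionally handles retries and progress reporting. The URL and digest come from the log lines above.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// downloadVerified streams url to dest while hashing the bytes, then
// compares the digest against the expected md5 from the preload index.
func downloadVerified(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadVerified(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4",
		"preloaded.tar.lz4",
		"0de094b674a9198bc47721c3b23603d5",
	)
	if err != nil {
		log.Fatal(err)
	}
}
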
	I0114 10:38:06.329763  130375 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/config.json ...
	I0114 10:38:06.329967  130375 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:38:06.330010  130375 start.go:364] acquiring machines lock for test-preload-103656: {Name:mkd8e957d814494daa90511121940947d1656700 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:38:06.330125  130375 start.go:368] acquired machines lock for "test-preload-103656" in 78.365µs
	I0114 10:38:06.330141  130375 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:38:06.330147  130375 fix.go:55] fixHost starting: 
	I0114 10:38:06.330340  130375 cli_runner.go:164] Run: docker container inspect test-preload-103656 --format={{.State.Status}}
	I0114 10:38:06.354371  130375 fix.go:103] recreateIfNeeded on test-preload-103656: state=Running err=<nil>
	W0114 10:38:06.354400  130375 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:38:06.356629  130375 out.go:177] * Updating the running docker "test-preload-103656" container ...
	I0114 10:38:06.357999  130375 machine.go:88] provisioning docker machine ...
	I0114 10:38:06.358031  130375 ubuntu.go:169] provisioning hostname "test-preload-103656"
	I0114 10:38:06.358081  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.381331  130375 main.go:134] libmachine: Using SSH client type: native
	I0114 10:38:06.381503  130375 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0114 10:38:06.381521  130375 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-103656 && echo "test-preload-103656" | sudo tee /etc/hostname
	I0114 10:38:06.507868  130375 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-103656
	
	I0114 10:38:06.507942  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.531561  130375 main.go:134] libmachine: Using SSH client type: native
	I0114 10:38:06.531797  130375 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0114 10:38:06.531820  130375 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-103656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-103656/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-103656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:38:06.651516  130375 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:38:06.651548  130375 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:38:06.651570  130375 ubuntu.go:177] setting up certificates
	I0114 10:38:06.651579  130375 provision.go:83] configureAuth start
	I0114 10:38:06.651623  130375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-103656
	I0114 10:38:06.674255  130375 provision.go:138] copyHostCerts
	I0114 10:38:06.674313  130375 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:38:06.674322  130375 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:38:06.674383  130375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:38:06.674501  130375 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:38:06.674511  130375 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:38:06.674534  130375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:38:06.674580  130375 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:38:06.674587  130375 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:38:06.674606  130375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:38:06.674647  130375 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.test-preload-103656 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-103656]
	I0114 10:38:06.758438  130375 provision.go:172] copyRemoteCerts
	I0114 10:38:06.758498  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:38:06.758531  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.781782  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:06.867004  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0114 10:38:06.883984  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:38:06.900643  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:38:06.916980  130375 provision.go:86] duration metric: configureAuth took 265.387379ms
	I0114 10:38:06.917007  130375 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:38:06.917175  130375 config.go:180] Loaded profile config "test-preload-103656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I0114 10:38:06.917189  130375 machine.go:91] provisioned docker machine in 559.172216ms
	I0114 10:38:06.917197  130375 start.go:300] post-start starting for "test-preload-103656" (driver="docker")
	I0114 10:38:06.917204  130375 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:38:06.917243  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:38:06.917284  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.939506  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.022998  130375 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:38:07.025772  130375 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:38:07.025799  130375 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:38:07.025807  130375 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:38:07.025812  130375 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:38:07.025820  130375 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:38:07.025870  130375 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:38:07.025928  130375 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:38:07.025999  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:38:07.032480  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:38:07.049250  130375 start.go:303] post-start completed in 132.037948ms
	I0114 10:38:07.049327  130375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:38:07.049372  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:07.072552  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.152367  130375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:38:07.156222  130375 fix.go:57] fixHost completed within 826.071854ms
	I0114 10:38:07.156243  130375 start.go:83] releasing machines lock for "test-preload-103656", held for 826.105821ms
	I0114 10:38:07.156316  130375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-103656
	I0114 10:38:07.178629  130375 ssh_runner.go:195] Run: cat /version.json
	I0114 10:38:07.178671  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:07.178723  130375 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 10:38:07.178770  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:07.204106  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.205840  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.306907  130375 ssh_runner.go:195] Run: systemctl --version
	I0114 10:38:07.310615  130375 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:38:07.322602  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:38:07.331962  130375 docker.go:189] disabling docker service ...
	I0114 10:38:07.332014  130375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:38:07.341188  130375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:38:07.349832  130375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:38:07.452797  130375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:38:07.546759  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:38:07.555705  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:38:07.567582  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0114 10:38:07.575254  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:38:07.583779  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:38:07.591757  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0114 10:38:07.599621  130375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:38:07.605752  130375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:38:07.611809  130375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:38:07.712619  130375 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:38:07.790522  130375 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:38:07.790586  130375 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:38:07.794469  130375 start.go:472] Will wait 60s for crictl version
	I0114 10:38:07.794530  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:07.797517  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:38:07.822052  130375 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:38:07Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:38:18.871493  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:38:18.894957  130375 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
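
The failed crictl probe above is retried on a timer ("will retry after 11.04660288s") because containerd needs a moment after a restart before its CRI server answers; on the second attempt the version call succeeds. A minimal poll-until-deadline sketch of that pattern follows (illustrative names and intervals; minikube's retry.go helper seen in the log computes its own backoff):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForCRI polls `crictl version` until the runtime answers or the
// deadline passes, mirroring the retry visible in the log above.
func waitForCRI(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("CRI never became ready: %v: %s", err, out)
		}
		time.Sleep(interval) // containerd may still be initializing
	}
}

func main() {
	if err := waitForCRI(60*time.Second, 5*time.Second); err != nil {
		log.Fatal(err)
	}
}
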
	I0114 10:38:18.895006  130375 ssh_runner.go:195] Run: containerd --version
	I0114 10:38:18.917350  130375 ssh_runner.go:195] Run: containerd --version
	I0114 10:38:18.941262  130375 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
	I0114 10:38:18.942722  130375 cli_runner.go:164] Run: docker network inspect test-preload-103656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:38:18.964764  130375 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0114 10:38:18.968124  130375 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:38:18.968201  130375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:38:18.990809  130375 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I0114 10:38:18.990864  130375 ssh_runner.go:195] Run: which lz4
	I0114 10:38:18.993842  130375 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0114 10:38:18.996676  130375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0114 10:38:18.996704  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I0114 10:38:19.887901  130375 containerd.go:496] Took 0.894080 seconds to copy over tarball
	I0114 10:38:19.887965  130375 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0114 10:38:22.578454  130375 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690461959s)
	I0114 10:38:22.578487  130375 containerd.go:503] Took 2.690557 seconds to extract the tarball
	I0114 10:38:22.578499  130375 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0114 10:38:22.596750  130375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:38:22.700364  130375 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:38:22.772561  130375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:38:22.801353  130375 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0114 10:38:22.801435  130375 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:22.801455  130375 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:22.801476  130375 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:22.801498  130375 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:22.801484  130375 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:22.801456  130375 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:22.801583  130375 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:22.801566  130375 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0114 10:38:22.802709  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:22.802715  130375 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:22.802722  130375 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:22.802737  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:22.802715  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:22.802770  130375 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0114 10:38:22.802853  130375 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:22.802932  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:23.243056  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0114 10:38:23.248174  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0114 10:38:23.270429  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0114 10:38:23.286046  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I0114 10:38:23.306009  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I0114 10:38:23.352775  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I0114 10:38:23.366810  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I0114 10:38:23.638648  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:38:24.020996  130375 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0114 10:38:24.021227  130375 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0114 10:38:24.021169  130375 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0114 10:38:24.021313  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.021332  130375 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:24.021391  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.128609  130375 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0114 10:38:24.128660  130375 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:24.128706  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.147374  130375 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I0114 10:38:24.147425  130375 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:24.147463  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.222049  130375 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I0114 10:38:24.222094  130375 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:24.222135  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.252508  130375 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I0114 10:38:24.252563  130375 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:24.252614  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.252621  130375 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I0114 10:38:24.252661  130375 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:24.252703  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.350906  130375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0114 10:38:24.350931  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0114 10:38:24.350955  130375 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:24.350994  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.350995  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:24.351085  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:24.351175  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:24.351214  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:24.351282  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:24.351321  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:26.457836  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (2.10675309s)
	I0114 10:38:26.457871  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0114 10:38:26.457879  130375 ssh_runner.go:235] Completed: which crictl: (2.106855647s)
	I0114 10:38:26.457914  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (2.106963875s)
	I0114 10:38:26.457929  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0114 10:38:26.457934  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:26.457932  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0114 10:38:26.457991  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0114 10:38:26.457987  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (2.106883834s)
	I0114 10:38:26.458010  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0114 10:38:26.458032  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (2.106838738s)
	I0114 10:38:26.458042  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I0114 10:38:26.458044  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0114 10:38:26.458061  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (2.106824178s)
	I0114 10:38:26.458075  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I0114 10:38:26.458138  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (2.10676597s)
	I0114 10:38:26.458154  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6: (2.106848252s)
	I0114 10:38:26.458160  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I0114 10:38:26.458161  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I0114 10:38:26.487938  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0114 10:38:26.487965  130375 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0114 10:38:26.488021  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0114 10:38:26.488066  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0114 10:38:26.488140  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0114 10:38:26.488171  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0114 10:38:26.488226  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:38:32.755504  130375 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (6.267452686s)
	I0114 10:38:32.755578  130375 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I0114 10:38:32.755651  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0114 10:38:32.755582  130375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (6.267329372s)
	I0114 10:38:32.755716  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0114 10:38:33.756342  130375 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7: (1.000655944s)
	I0114 10:38:33.756374  130375 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0114 10:38:33.756397  130375 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0114 10:38:33.756437  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0114 10:38:34.834072  130375 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.077603132s)
	I0114 10:38:34.834103  130375 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0114 10:38:34.834135  130375 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:38:34.834187  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:38:35.234947  130375 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0114 10:38:35.235008  130375 cache_images.go:92] LoadImages completed in 12.433632171s
	W0114 10:38:35.235189  130375 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
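	
The warning above is non-fatal: only the kube-proxy tarball was missing from the local image cache, and the start continues, leaving any absent image to be pulled from the registry on demand. A rough sketch of that cache-or-pull decision follows (hypothetical helper name, not minikube's cache_images.go API; the ctr and crictl invocations mirror the commands in the log):

package example

import (
	"fmt"
	"os"
	"os/exec"
)

// loadOrPull imports an image from a cached tarball when it exists;
// otherwise it falls back to pulling the image via crictl.
func loadOrPull(cachePath, image string) error {
	if _, err := os.Stat(cachePath); err == nil {
		// e.g. sudo ctr -n=k8s.io images import <tarball>, as logged above
		return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", cachePath).Run()
	}
	fmt.Fprintf(os.Stderr, "cache miss for %s, pulling\n", image)
	return exec.Command("sudo", "crictl", "pull", image).Run()
}
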
	I0114 10:38:35.235251  130375 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:38:35.261888  130375 cni.go:95] Creating CNI manager for ""
	I0114 10:38:35.261913  130375 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:38:35.261928  130375 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:38:35.261947  130375 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-103656 NodeName:test-preload-103656 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:38:35.262107  130375 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-103656"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:38:35.262227  130375 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-103656 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:38:35.262275  130375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I0114 10:38:35.270359  130375 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:38:35.270436  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:38:35.323452  130375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I0114 10:38:35.339112  130375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:38:35.354377  130375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0114 10:38:35.368005  130375 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:38:35.371353  130375 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656 for IP: 192.168.67.2
	I0114 10:38:35.371467  130375 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:38:35.371525  130375 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:38:35.371624  130375 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/client.key
	I0114 10:38:35.371732  130375 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/apiserver.key.c7fa3a9e
	I0114 10:38:35.371782  130375 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/proxy-client.key
	I0114 10:38:35.371906  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:38:35.371945  130375 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:38:35.371959  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:38:35.371990  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:38:35.372028  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:38:35.372073  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:38:35.372128  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:38:35.372989  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:38:35.438659  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:38:35.459078  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:38:35.521122  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 10:38:35.543193  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:38:35.565246  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:38:35.636400  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:38:35.658645  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:38:35.679179  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:38:35.736936  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:38:35.756587  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:38:35.776144  130375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:38:35.829677  130375 ssh_runner.go:195] Run: openssl version
	I0114 10:38:35.835452  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:38:35.844615  130375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:38:35.848677  130375 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:38:35.848736  130375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:38:35.854366  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:38:35.862771  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:38:35.871237  130375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:38:35.874979  130375 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:38:35.875031  130375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:38:35.925670  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:38:35.934438  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:38:35.943308  130375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:38:35.946992  130375 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:38:35.947066  130375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:38:35.952914  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
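	(The test -L || ln -fs commands above create OpenSSL subject-hash symlinks: `openssl x509 -hash -noout -in <cert>` prints the hash (b5213941 for the minikube CA here), and OpenSSL resolves CAs in /etc/ssl/certs via <hash>.0 links. A minimal Go rendition of that idempotent link step follows, using only the standard library; ensureSymlink is a hypothetical helper.)

package main

import (
	"fmt"
	"os"
)

// ensureSymlink reproduces "test -L link || ln -fs target link":
// leave an existing symlink alone, otherwise force-create one.
func ensureSymlink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink, nothing to do
	}
	os.Remove(link) // ignore error; mirrors ln -fs replacing any stale file
	return os.Symlink(target, link)
}

func main() {
	if err := ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}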
	I0114 10:38:35.960823  130375 kubeadm.go:396] StartCluster: {Name:test-preload-103656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:38:35.960927  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:38:35.960969  130375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:38:36.030809  130375 cri.go:87] found id: "79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1"
	I0114 10:38:36.030836  130375 cri.go:87] found id: "1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a"
	I0114 10:38:36.030848  130375 cri.go:87] found id: ""
	I0114 10:38:36.030901  130375 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0114 10:38:36.069795  130375 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e","pid":3374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e/rootfs","created":"2023-01-14T10:38:26.038691683Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-103656_b5ce6aaa8feda8e03d2ece032691651a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-103656","io.kuber
netes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","pid":3389,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608/rootfs","created":"2023-01-14T10:38:26.042050411Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_0c70afb0-95ab-4b58-84ba-92f0658d439b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.c
ri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a","pid":3438,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a/rootfs","created":"2023-01-14T10:38:26.268614811Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f","pid":3619,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f/rootfs","created":"2023-01-14T10:38:27.25000565Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-d9dwj_f015d1c6-ed2c-4a5c-89ca-3aa07dc45194","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-d9dwj","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896","pid":33
75,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896/rootfs","created":"2023-01-14T10:38:26.043120078Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-103656_cf327063d0646260216423cc46e62e98","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573
599bfde8b13","pid":2193,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13/rootfs","created":"2023-01-14T10:37:38.323610353Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jnf8g_c4b1d229-e57c-4245-b94c-5f87340ac132","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-jnf8g","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50",
"pid":2641,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50/rootfs","created":"2023-01-14T10:37:46.231018242Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-d9dwj","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","pid":2582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","rootfs":"/run/containerd/io
.containerd.runtime.v2.task/k8s.io/46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a/rootfs","created":"2023-01-14T10:37:46.123706723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_0c70afb0-95ab-4b58-84ba-92f0658d439b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","pid":1510,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","rootfs":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113/rootfs","created":"2023-01-14T10:37:18.662644331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-103656_b5ce6aaa8feda8e03d2ece032691651a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","rootfs":"/run/containerd/io.conta
inerd.runtime.v2.task/k8s.io/6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea/rootfs","created":"2023-01-14T10:37:18.664453109Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-103656_eed4bab1d5401611f9e0dbfc25eace67","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","rootfs":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab/rootfs","created":"2023-01-14T10:37:18.66445572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-103656_5e3f9e81a771cae657895e0ac9a4db8b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c","pid":2642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79a0a4f98c70307c7c45ff7122be3118ff508b1de
76fdb59984eed1bfaa0784c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c/rootfs","created":"2023-01-14T10:37:46.231183602Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","pid":1513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691/rootfs","created":
"2023-01-14T10:37:18.662850526Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-103656_cf327063d0646260216423cc46e62e98","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","pid":2583,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe038
59/rootfs","created":"2023-01-14T10:37:46.121931291Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-d9dwj_f015d1c6-ed2c-4a5c-89ca-3aa07dc45194","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-d9dwj","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d","pid":1647,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7
950aaa4e348b64d/rootfs","created":"2023-01-14T10:37:18.849530018Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4/rootfs","created":"2023-01-14T10:37:18.840907916Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kuber
netes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21","pid":2228,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21/rootfs","created":"2023-01-14T10:37:38.410230285Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"218b757d1439e7e68e31cb0d83782657befb7c1dd
78f585ff3573599bfde8b13","io.kubernetes.cri.sandbox-name":"kube-proxy-jnf8g","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36","pid":1627,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36/rootfs","created":"2023-01-14T10:37:18.839041607Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","pid":2192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba/rootfs","created":"2023-01-14T10:37:38.321323425Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dvmsq_bf124d4e-df5c-446c-bb84-adb2312fb0d7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-dvmsq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id"
:"bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34","pid":3571,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34/rootfs","created":"2023-01-14T10:38:27.052699215Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-103656_eed4bab1d5401611f9e0dbfc25eace67","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f","pid":2449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f/rootfs","created":"2023-01-14T10:37:41.121120967Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","io.kubernetes.cri.sandbox-name":"kindnet-dvmsq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c/rootfs","created":"2023-01-14T10:38:27.126924717Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dvmsq_bf124d4e-df5c-446c-bb84-adb2312fb0d7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-dvmsq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22","pid":1654,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfdd23
b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22/rootfs","created":"2023-01-14T10:37:18.849606176Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","io.kubernetes.cri.sandbox-name":"etcd-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901","pid":3371,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901/rootfs","cre
ated":"2023-01-14T10:38:26.042915189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-103656_5e3f9e81a771cae657895e0ac9a4db8b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
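	(The JSON above is the raw output of `sudo runc --root /run/containerd/runc/k8s.io list -f json`, one object per task. A small sketch of decoding it follows; the struct declares only the keys actually used, with field tags matching the JSON keys shown in the log.)

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// runcContainer captures the subset of `runc list -f json` fields
// that the log above inspects.
type runcContainer struct {
	ID          string            `json:"id"`
	Pid         int               `json:"pid"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	var list []runcContainer
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range list {
		// e.g. "006c42df5db5 running pod=etcd-test-preload-103656"
		fmt.Printf("%s %s pod=%s\n", c.ID[:12], c.Status,
			c.Annotations["io.kubernetes.cri.sandbox-name"])
	}
}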
	I0114 10:38:36.070165  130375 cri.go:124] list returned 24 containers
	I0114 10:38:36.070183  130375 cri.go:127] container: {ID:006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e Status:running}
	I0114 10:38:36.070225  130375 cri.go:129] skipping 006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e - not in ps
	I0114 10:38:36.070230  130375 cri.go:127] container: {ID:02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608 Status:running}
	I0114 10:38:36.070243  130375 cri.go:129] skipping 02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608 - not in ps
	I0114 10:38:36.070248  130375 cri.go:127] container: {ID:1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a Status:running}
	I0114 10:38:36.070260  130375 cri.go:133] skipping {1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a running}: state = "running", want "paused"
	I0114 10:38:36.070274  130375 cri.go:127] container: {ID:1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f Status:running}
	I0114 10:38:36.070287  130375 cri.go:129] skipping 1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f - not in ps
	I0114 10:38:36.070297  130375 cri.go:127] container: {ID:1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896 Status:running}
	I0114 10:38:36.070308  130375 cri.go:129] skipping 1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896 - not in ps
	I0114 10:38:36.070313  130375 cri.go:127] container: {ID:218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13 Status:running}
	I0114 10:38:36.070321  130375 cri.go:129] skipping 218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13 - not in ps
	I0114 10:38:36.070327  130375 cri.go:127] container: {ID:3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50 Status:running}
	I0114 10:38:36.070346  130375 cri.go:129] skipping 3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50 - not in ps
	I0114 10:38:36.070356  130375 cri.go:127] container: {ID:46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a Status:running}
	I0114 10:38:36.070364  130375 cri.go:129] skipping 46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a - not in ps
	I0114 10:38:36.070373  130375 cri.go:127] container: {ID:665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113 Status:running}
	I0114 10:38:36.070381  130375 cri.go:129] skipping 665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113 - not in ps
	I0114 10:38:36.070390  130375 cri.go:127] container: {ID:6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea Status:running}
	I0114 10:38:36.070402  130375 cri.go:129] skipping 6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea - not in ps
	I0114 10:38:36.070411  130375 cri.go:127] container: {ID:77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab Status:running}
	I0114 10:38:36.070418  130375 cri.go:129] skipping 77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab - not in ps
	I0114 10:38:36.070428  130375 cri.go:127] container: {ID:79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c Status:running}
	I0114 10:38:36.070435  130375 cri.go:129] skipping 79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c - not in ps
	I0114 10:38:36.070444  130375 cri.go:127] container: {ID:7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691 Status:running}
	I0114 10:38:36.070452  130375 cri.go:129] skipping 7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691 - not in ps
	I0114 10:38:36.070462  130375 cri.go:127] container: {ID:7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859 Status:running}
	I0114 10:38:36.070469  130375 cri.go:129] skipping 7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859 - not in ps
	I0114 10:38:36.070475  130375 cri.go:127] container: {ID:8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d Status:running}
	I0114 10:38:36.070485  130375 cri.go:129] skipping 8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d - not in ps
	I0114 10:38:36.070490  130375 cri.go:127] container: {ID:8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4 Status:running}
	I0114 10:38:36.070497  130375 cri.go:129] skipping 8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4 - not in ps
	I0114 10:38:36.070505  130375 cri.go:127] container: {ID:927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21 Status:running}
	I0114 10:38:36.070523  130375 cri.go:129] skipping 927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21 - not in ps
	I0114 10:38:36.070562  130375 cri.go:127] container: {ID:a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36 Status:running}
	I0114 10:38:36.070581  130375 cri.go:129] skipping a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36 - not in ps
	I0114 10:38:36.070591  130375 cri.go:127] container: {ID:ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba Status:running}
	I0114 10:38:36.070603  130375 cri.go:129] skipping ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba - not in ps
	I0114 10:38:36.070616  130375 cri.go:127] container: {ID:bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34 Status:running}
	I0114 10:38:36.070626  130375 cri.go:129] skipping bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34 - not in ps
	I0114 10:38:36.070635  130375 cri.go:127] container: {ID:c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f Status:running}
	I0114 10:38:36.070645  130375 cri.go:129] skipping c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f - not in ps
	I0114 10:38:36.070654  130375 cri.go:127] container: {ID:d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c Status:running}
	I0114 10:38:36.070665  130375 cri.go:129] skipping d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c - not in ps
	I0114 10:38:36.070675  130375 cri.go:127] container: {ID:dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22 Status:running}
	I0114 10:38:36.070684  130375 cri.go:129] skipping dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22 - not in ps
	I0114 10:38:36.070693  130375 cri.go:127] container: {ID:f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901 Status:running}
	I0114 10:38:36.070703  130375 cri.go:129] skipping f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901 - not in ps
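	(The cri.go:127-133 lines above apply a two-stage filter: a container is kept only if crictl listed its ID for kube-system ("not in ps" otherwise) and its runc state equals the wanted state, "paused" here, so nothing survives. A compact sketch of that selection logic, with hypothetical names:)

package main

import "fmt"

type ctr struct{ ID, Status string }

// filterContainers keeps IDs that crictl reported (inPs) and whose
// runc state matches want; everything else logs a skip, as above.
func filterContainers(all []ctr, inPs map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		switch {
		case !inPs[c.ID]:
			fmt.Printf("skipping %s - not in ps\n", c.ID)
		case c.Status != want:
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
		default:
			keep = append(keep, c.ID)
		}
	}
	return keep
}

func main() {
	all := []ctr{{"1314aeb4b21d", "running"}, {"006c42df5db5", "running"}}
	inPs := map[string]bool{"1314aeb4b21d": true} // IDs crictl returned
	fmt.Println(filterContainers(all, inPs, "paused")) // empty: none paused
}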
	I0114 10:38:36.070748  130375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:38:36.078824  130375 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:38:36.078854  130375 kubeadm.go:627] restartCluster start
	I0114 10:38:36.078901  130375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:38:36.122640  130375 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:38:36.123181  130375 kubeconfig.go:92] found "test-preload-103656" server: "https://192.168.67.2:8443"
	I0114 10:38:36.123876  130375 kapi.go:59] client config for test-preload-103656: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:38:36.124406  130375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:38:36.132023  130375 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-01-14 10:37:14.960632955 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-01-14 10:38:35.362143242 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
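	(The unified diff above shows the only drift is kubernetesVersion v1.24.4 -> v1.24.6, which is exactly what TestPreload exercises, so minikube takes the reconfigure path instead of a full re-init. The decision can be driven by diff's exit status: 0 means identical, 1 means the files differ, anything higher is an error. A sketch of that check via os/exec, with a hypothetical configsDiffer helper:)

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configsDiffer runs `diff -u old new` and maps its exit status:
// 0 -> unchanged, 1 -> differ (returning the diff text), >1 -> error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ, as above
	}
	return false, "", err // exit >1, or diff could not run
}

func main() {
	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(differ, err)
	fmt.Print(diff)
}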
	I0114 10:38:36.132048  130375 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:38:36.132061  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:38:36.132119  130375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:38:36.160539  130375 cri.go:87] found id: "2e461cc338f9a09ff93a1d26f9c81992089fd75b8f78289564fb63e4a9931e0e"
	I0114 10:38:36.160572  130375 cri.go:87] found id: "79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1"
	I0114 10:38:36.160583  130375 cri.go:87] found id: "1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a"
	I0114 10:38:36.160592  130375 cri.go:87] found id: ""
	I0114 10:38:36.160599  130375 cri.go:232] Stopping containers: [2e461cc338f9a09ff93a1d26f9c81992089fd75b8f78289564fb63e4a9931e0e 79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1 1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a]
	I0114 10:38:36.160657  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:36.163886  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 2e461cc338f9a09ff93a1d26f9c81992089fd75b8f78289564fb63e4a9931e0e 79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1 1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a
	I0114 10:38:36.650086  130375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 10:38:36.727192  130375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:38:36.734488  130375 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Jan 14 10:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 14 10:37 /etc/kubernetes/scheduler.conf
	
	I0114 10:38:36.734539  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:38:36.741370  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:38:36.747971  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:38:36.754623  130375 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:38:36.754729  130375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 10:38:36.761563  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:38:36.768329  130375 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:38:36.768396  130375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 10:38:36.774780  130375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:38:36.781563  130375 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 10:38:36.781581  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:36.973702  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:37.742032  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:37.961156  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:38.011221  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
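	(The five commands above rerun individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`, with PATH pinned to the version-matched binaries directory. A sketch of driving that sequence from Go; phase names, config path, and PATH value are taken from the log, the loop itself is illustrative:)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phases in the same order the log runs them.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prepend the version-pinned binaries dir, as the log's env PATH=... does.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.6:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n%s", p, err, out)
			os.Exit(1)
		}
	}
}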
	I0114 10:38:38.131693  130375 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:38:38.131756  130375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:38:38.144596  130375 api_server.go:71] duration metric: took 12.924067ms to wait for apiserver process to appear ...
	I0114 10:38:38.144647  130375 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:38:38.144661  130375 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0114 10:38:38.150219  130375 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0114 10:38:38.221645  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:38.221684  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:38.723441  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:38.723479  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:39.223074  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:39.223113  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:39.723804  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:39.723836  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:40.223490  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:40.223523  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	W0114 10:38:40.722537  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:41.222838  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:41.722762  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:42.222627  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:42.722171  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:43.223164  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:43.723196  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:44.223018  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I0114 10:38:47.121508  130375 api_server.go:140] control plane version: v1.24.6
	I0114 10:38:47.121545  130375 api_server.go:130] duration metric: took 8.976889643s to wait for apiserver health ...
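	(The polling above shows the expected restart sequence: the apiserver keeps answering as v1.24.4, then refuses connections while its static pod is replaced, then comes back as v1.24.6 about 9s later. A minimal Go sketch of that wait loop, polling the /version endpoint (which reports gitVersion) at the ~500ms cadence seen in the log; TLS verification is skipped here purely for brevity, which a real client should not do:)

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitForVersion polls url until the apiserver reports the wanted
// gitVersion, tolerating connection refusals during the restart.
func waitForVersion(url, want string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			var v struct {
				GitVersion string `json:"gitVersion"`
			}
			json.NewDecoder(resp.Body).Decode(&v)
			resp.Body.Close()
			if v.GitVersion == want {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the log's retry cadence
	}
	return fmt.Errorf("apiserver never reported %s", want)
}

func main() {
	fmt.Println(waitForVersion("https://192.168.67.2:8443/version", "v1.24.6", time.Minute))
}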
	I0114 10:38:47.121557  130375 cni.go:95] Creating CNI manager for ""
	I0114 10:38:47.121565  130375 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:38:47.123713  130375 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 10:38:47.125291  130375 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:38:47.130898  130375 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I0114 10:38:47.130926  130375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:38:47.232875  130375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:38:47.938528  130375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:38:47.945649  130375 system_pods.go:59] 7 kube-system pods found
	I0114 10:38:47.945684  130375 system_pods.go:61] "coredns-6d4b75cb6d-d9dwj" [f015d1c6-ed2c-4a5c-89ca-3aa07dc45194] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0114 10:38:47.945691  130375 system_pods.go:61] "etcd-test-preload-103656" [434495c0-10c7-4dc9-af84-dcec0b05fc6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 10:38:47.945698  130375 system_pods.go:61] "kindnet-dvmsq" [bf124d4e-df5c-446c-bb84-adb2312fb0d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0114 10:38:47.945706  130375 system_pods.go:61] "kube-controller-manager-test-preload-103656" [aea3925f-2ae7-4907-9061-948eb9d95520] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 10:38:47.945719  130375 system_pods.go:61] "kube-proxy-jnf8g" [c4b1d229-e57c-4245-b94c-5f87340ac132] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0114 10:38:47.945725  130375 system_pods.go:61] "kube-scheduler-test-preload-103656" [f63590d3-f765-4521-944f-10f98f6dc7a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 10:38:47.945730  130375 system_pods.go:61] "storage-provisioner" [0c70afb0-95ab-4b58-84ba-92f0658d439b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:38:47.945736  130375 system_pods.go:74] duration metric: took 7.183082ms to wait for pod list to return data ...
	I0114 10:38:47.945742  130375 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:38:47.947934  130375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:38:47.947957  130375 node_conditions.go:123] node cpu capacity is 8
	I0114 10:38:47.947970  130375 node_conditions.go:105] duration metric: took 2.221176ms to run NodePressure ...
	I0114 10:38:47.947986  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:48.173129  130375 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 10:38:48.177534  130375 kubeadm.go:778] kubelet initialised
	I0114 10:38:48.177560  130375 kubeadm.go:779] duration metric: took 4.40386ms waiting for restarted kubelet to initialise ...
	I0114 10:38:48.177570  130375 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:38:48.182970  130375 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace to be "Ready" ...
	I0114 10:38:50.230890  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:52.231044  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:54.231722  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:56.731766  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:59.230873  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:01.231296  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:01.730700  130375 pod_ready.go:92] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"True"
	I0114 10:39:01.730729  130375 pod_ready.go:81] duration metric: took 13.547734534s waiting for pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace to be "Ready" ...
	I0114 10:39:01.730745  130375 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-103656" in "kube-system" namespace to be "Ready" ...
	I0114 10:39:03.740578  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:05.740633  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:08.239656  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:10.240563  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:12.740125  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:14.740485  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:17.240667  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:19.739861  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:21.740285  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:23.740687  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:26.240438  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:28.240638  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:30.740382  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:33.240772  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:35.739862  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:37.739916  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:39.740222  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:42.240563  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:44.740477  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:47.239316  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:49.240030  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:51.240588  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:53.242557  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:55.739571  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:57.740011  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:59.740216  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:01.740337  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:03.740857  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:05.740910  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:08.240164  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:10.240244  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:12.242658  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:14.740962  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:17.240319  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:19.740715  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:22.240738  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:24.739464  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:26.740286  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:29.240377  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:31.240594  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:33.741864  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:36.242080  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:38.740232  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:41.239877  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:43.740351  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:46.242521  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:48.739651  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:50.739975  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:52.740088  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:54.740137  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:56.740342  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:59.240517  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:01.240708  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:03.739815  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:05.739861  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:07.740249  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:09.740580  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:12.239661  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:14.239867  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:16.240137  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:18.740329  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:21.240196  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:23.739494  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:25.740245  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:28.240321  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:30.739941  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:32.740332  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:35.240389  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:37.739611  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:39.740466  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:42.240551  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:44.740029  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:47.239176  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:49.240287  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:51.740204  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:53.740740  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:56.239972  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:58.739548  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:00.740693  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:03.239951  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:05.740324  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:08.239393  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:10.739405  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:12.740554  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:15.239561  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:17.739425  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:19.740051  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:22.240411  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:24.740035  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:27.240177  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:29.240744  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:31.739990  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:34.240535  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:36.739791  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:39.240303  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:41.740554  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:44.239594  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:46.240573  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:48.740328  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:50.740616  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:53.239696  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:55.240770  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:57.739934  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:59.740686  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:43:01.735198  130375 pod_ready.go:81] duration metric: took 4m0.00443583s waiting for pod "etcd-test-preload-103656" in "kube-system" namespace to be "Ready" ...
	E0114 10:43:01.735242  130375 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-103656" in "kube-system" namespace to be "Ready" (will not retry!)
	I0114 10:43:01.735266  130375 pod_ready.go:38] duration metric: took 4m13.557680414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:43:01.735298  130375 kubeadm.go:631] restartCluster took 4m25.6564342s
	W0114 10:43:01.735464  130375 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0114 10:43:01.735504  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0114 10:43:03.386915  130375 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.651383008s)
	I0114 10:43:03.386970  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:43:03.396144  130375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:43:03.402902  130375 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:43:03.402960  130375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:43:03.409317  130375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:43:03.409355  130375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:43:03.445845  130375 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0114 10:43:03.445947  130375 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:43:03.472908  130375 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:43:03.473085  130375 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:43:03.473148  130375 kubeadm.go:317] OS: Linux
	I0114 10:43:03.473233  130375 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:43:03.473338  130375 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:43:03.473421  130375 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:43:03.473483  130375 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:43:03.473539  130375 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:43:03.473595  130375 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:43:03.473649  130375 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:43:03.473710  130375 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:43:03.473792  130375 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:43:03.545232  130375 kubeadm.go:317] W0114 10:43:03.440995    6524 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:43:03.545502  130375 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:43:03.545641  130375 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:43:03.545723  130375 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0114 10:43:03.545781  130375 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0114 10:43:03.545843  130375 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0114 10:43:03.545950  130375 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0114 10:43:03.546035  130375 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0114 10:43:03.546248  130375 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.440995    6524 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.440995    6524 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0114 10:43:03.546284  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0114 10:43:03.874686  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:43:03.884299  130375 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:43:03.884355  130375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:43:03.891278  130375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:43:03.891322  130375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:43:03.924537  130375 kubeadm.go:317] W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:43:03.955490  130375 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:43:04.017449  130375 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:43:04.017559  130375 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0114 10:43:04.017608  130375 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0114 10:43:04.017664  130375 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0114 10:43:04.017800  130375 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0114 10:43:04.017909  130375 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 10:43:04.019616  130375 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0114 10:43:04.019700  130375 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:43:04.019798  130375 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:43:04.019933  130375 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:43:04.020004  130375 kubeadm.go:317] OS: Linux
	I0114 10:43:04.020056  130375 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:43:04.020136  130375 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:43:04.020206  130375 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:43:04.020268  130375 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:43:04.020310  130375 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:43:04.020369  130375 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:43:04.020438  130375 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:43:04.020500  130375 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:43:04.020561  130375 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:43:04.020620  130375 kubeadm.go:398] StartCluster complete in 4m28.059804219s
	I0114 10:43:04.020658  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:43:04.020706  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:43:04.043514  130375 cri.go:87] found id: ""
	I0114 10:43:04.043533  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.043542  130375 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:43:04.043548  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:43:04.043601  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:43:04.066568  130375 cri.go:87] found id: ""
	I0114 10:43:04.066589  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.066596  130375 logs.go:276] No container was found matching "etcd"
	I0114 10:43:04.066602  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:43:04.066643  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:43:04.089342  130375 cri.go:87] found id: ""
	I0114 10:43:04.089369  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.089378  130375 logs.go:276] No container was found matching "coredns"
	I0114 10:43:04.089386  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:43:04.089436  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:43:04.112375  130375 cri.go:87] found id: ""
	I0114 10:43:04.112402  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.112414  130375 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:43:04.112423  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:43:04.112475  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:43:04.134889  130375 cri.go:87] found id: ""
	I0114 10:43:04.134917  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.134926  130375 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:43:04.134935  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:43:04.134978  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:43:04.157596  130375 cri.go:87] found id: ""
	I0114 10:43:04.157621  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.157627  130375 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:43:04.157634  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:43:04.157674  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:43:04.179584  130375 cri.go:87] found id: ""
	I0114 10:43:04.179612  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.179621  130375 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:43:04.179630  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:43:04.179709  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:43:04.201868  130375 cri.go:87] found id: ""
	I0114 10:43:04.201894  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.201903  130375 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:43:04.201916  130375 logs.go:123] Gathering logs for kubelet ...
	I0114 10:43:04.201932  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:43:04.270344  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050446    4197 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.270544  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050472    4197 projected.go:192] Error preparing data for projected volume kube-api-access-8zqx7 for pod kube-system/kindnet-dvmsq: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.270731  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050496    4197 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.270877  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050536    4197 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271282  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050557    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf124d4e-df5c-446c-bb84-adb2312fb0d7-kube-api-access-8zqx7 podName:bf124d4e-df5c-446c-bb84-adb2312fb0d7 nodeName:}" failed. No retries permitted until 2023-01-14 10:38:51.050533394 +0000 UTC m=+13.089583874 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8zqx7" (UniqueName: "kubernetes.io/projected/bf124d4e-df5c-446c-bb84-adb2312fb0d7-kube-api-access-8zqx7") pod "kindnet-dvmsq" (UID: "bf124d4e-df5c-446c-bb84-adb2312fb0d7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271443  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050563    4197 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271586  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050586    4197 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271775  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050598    4197 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271958  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050630    4197 projected.go:192] Error preparing data for projected volume kube-api-access-5jwgd for pod kube-system/coredns-6d4b75cb6d-d9dwj: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.272144  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050647    4197 projected.go:192] Error preparing data for projected volume kube-api-access-2ptl2 for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.272554  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050689    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f015d1c6-ed2c-4a5c-89ca-3aa07dc45194-kube-api-access-5jwgd podName:f015d1c6-ed2c-4a5c-89ca-3aa07dc45194 nodeName:}" failed. No retries permitted until 2023-01-14 10:38:49.050670598 +0000 UTC m=+11.089721091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5jwgd" (UniqueName: "kubernetes.io/projected/f015d1c6-ed2c-4a5c-89ca-3aa07dc45194-kube-api-access-5jwgd") pod "coredns-6d4b75cb6d-d9dwj" (UID: "f015d1c6-ed2c-4a5c-89ca-3aa07dc45194") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.272980  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050713    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c70afb0-95ab-4b58-84ba-92f0658d439b-kube-api-access-2ptl2 podName:0c70afb0-95ab-4b58-84ba-92f0658d439b nodeName:}" failed. No retries permitted until 2023-01-14 10:38:51.050699422 +0000 UTC m=+13.089749893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2ptl2" (UniqueName: "kubernetes.io/projected/0c70afb0-95ab-4b58-84ba-92f0658d439b-kube-api-access-2ptl2") pod "storage-provisioner" (UID: "0c70afb0-95ab-4b58-84ba-92f0658d439b") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.273167  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050777    4197 projected.go:192] Error preparing data for projected volume kube-api-access-7dfp9 for pod kube-system/kube-proxy-jnf8g: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.273586  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050844    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b1d229-e57c-4245-b94c-5f87340ac132-kube-api-access-7dfp9 podName:c4b1d229-e57c-4245-b94c-5f87340ac132 nodeName:}" failed. No retries permitted until 2023-01-14 10:38:51.050827219 +0000 UTC m=+13.089877705 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7dfp9" (UniqueName: "kubernetes.io/projected/c4b1d229-e57c-4245-b94c-5f87340ac132-kube-api-access-7dfp9") pod "kube-proxy-jnf8g" (UID: "c4b1d229-e57c-4245-b94c-5f87340ac132") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.293959  130375 logs.go:123] Gathering logs for dmesg ...
	I0114 10:43:04.293985  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:43:04.307811  130375 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:43:04.307842  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:43:04.500460  130375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:43:04.500487  130375 logs.go:123] Gathering logs for containerd ...
	I0114 10:43:04.500502  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:43:04.559152  130375 logs.go:123] Gathering logs for container status ...
	I0114 10:43:04.559197  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0114 10:43:04.584531  130375 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0114 10:43:04.584568  130375 out.go:239] * 
	* 
	W0114 10:43:04.584725  130375 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:43:04.584753  130375 out.go:239] * 
	* 
	W0114 10:43:04.585563  130375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:43:04.588003  130375 out.go:177] X Problems detected in kubelet:
	I0114 10:43:04.589229  130375 out.go:177]   Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050446    4197 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.590530  130375 out.go:177]   Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050472    4197 projected.go:192] Error preparing data for projected volume kube-api-access-8zqx7 for pod kube-system/kindnet-dvmsq: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.591776  130375 out.go:177]   Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050496    4197 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.593884  130375 out.go:177] 
	W0114 10:43:04.595189  130375 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:43:04.595274  130375 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	* Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0114 10:43:04.595330  130375 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	* Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0114 10:43:04.596623  130375 out.go:177] 

                                                
                                                
** /stderr **
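What the trace above shows: minikube's restartCluster timed out after 4m0s waiting for etcd, ran `kubeadm reset --force`, and then both `kubeadm init` attempts died in preflight because ports 2379 and 2380 (the etcd client and peer ports) were still bound — the previous etcd was never actually torn down. A diagnostic sketch for that state, run inside the node; the container IDs below are placeholders, not values from this run:

    # open a shell on the node
    minikube ssh -p test-preload-103656

    # who still listens on the etcd ports? (note: the suggestion printed
    # above, "lsof -p<port>", selects by PID; the port form of lsof is -i)
    sudo ss -ltnp '( sport = :2379 or sport = :2380 )'
    sudo lsof -nP -iTCP:2379 -sTCP:LISTEN

    # if a stale etcd container survived the reset, stop it via the CRI
    sudo crictl ps -a --name etcd
    sudo crictl stop <container-id>   # placeholder id
    sudo crictl rm <container-id>     # placeholder id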
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-103656 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
panic.go:522: *** TestPreload FAILED at 2023-01-14 10:43:04.640643974 +0000 UTC m=+2230.746674594
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-103656
helpers_test.go:235: (dbg) docker inspect test-preload-103656:

-- stdout --
	[
	    {
	        "Id": "134be96401305dc1a51e91f2f2cdcaf0a717b45b8d273a51170acb4d5e4732ce",
	        "Created": "2023-01-14T10:36:57.75552969Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 127167,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:36:58.226286757Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/134be96401305dc1a51e91f2f2cdcaf0a717b45b8d273a51170acb4d5e4732ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/134be96401305dc1a51e91f2f2cdcaf0a717b45b8d273a51170acb4d5e4732ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/134be96401305dc1a51e91f2f2cdcaf0a717b45b8d273a51170acb4d5e4732ce/hosts",
	        "LogPath": "/var/lib/docker/containers/134be96401305dc1a51e91f2f2cdcaf0a717b45b8d273a51170acb4d5e4732ce/134be96401305dc1a51e91f2f2cdcaf0a717b45b8d273a51170acb4d5e4732ce-json.log",
	        "Name": "/test-preload-103656",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-103656:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-103656",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/150110978bc2f15a119701db74fd05ade6a29d94a9fb31892efa87387f51a61f-init/diff:/var/lib/docker/overlay2/cfa67474dfffbd23c875ed1363951467d9d88e2b76451e5643f2505208741f3b/diff:/var/lib/docker/overlay2/073ec06077c9f139927a68d24e4f683141baf9acf954f7927a62d439b8e24069/diff:/var/lib/docker/overlay2/100e369464b40a65b67d4855b5a41f41832f93605f574ff35657d9b2d0ee5b4f/diff:/var/lib/docker/overlay2/e2f9a50fd4c46aeeaf52dd5d2c45c5548e516eaa4949cae4e8f8be3dda02e560/diff:/var/lib/docker/overlay2/6d3b34d6067ad9d3ff171a32fea0902c6748df9aeb5a46e12971cdc70934e200/diff:/var/lib/docker/overlay2/44f244a49f3260ebade676a0e6177935228bcd4504617609ee4343aa284e724c/diff:/var/lib/docker/overlay2/1cba83561484d9f781c67421553c95b75266d2217256379d5787e510ac28483f/diff:/var/lib/docker/overlay2/9ec5ab0f595877fa3d60d26e7aa243026d8b45fea861a3e12c469d81ab1ffe6c/diff:/var/lib/docker/overlay2/30d22319caaa0760daf22d54c95076cad3b970afb61aa7c018ac37b623117613/diff:/var/lib/docker/overlay2/1f5756
3ce3807317a405416fbe25b96e16e33708f4f97020c4f82e1e2b4da5ed/diff:/var/lib/docker/overlay2/604bdff9bf4c8bdcc970ae4f7e8734a5aa27c04fb328f61dea00c3740f12daba/diff:/var/lib/docker/overlay2/03f7c27604538c82d3d43dfde85aa33dc8f2658b93b51f500b27edd3b1aaed98/diff:/var/lib/docker/overlay2/f9ceccc940eb08b69d102c744810d1aff5795c7e9a58c20d43ca6857fa21b8ea/diff:/var/lib/docker/overlay2/576f7412e6f61feeea74cdfbae850007513e8aa407ce5e45f903c70ce2f89fe5/diff:/var/lib/docker/overlay2/958517a359371ca3276a50323466f96ec3d5d7687cb2f26c287a9a343fcbcd20/diff:/var/lib/docker/overlay2/c09247966342dd284c940bcd881b6187476a63e53055e9f378aaa25ceaa86263/diff:/var/lib/docker/overlay2/85bda0ea7bf5a8c05a6eb175b445c71a710e3e392fc1b70957e3902cec94586f/diff:/var/lib/docker/overlay2/7cde8ffb6999e9d99ff44b83daaf1a781dd6546a7a96eda5b901e88658c78f74/diff:/var/lib/docker/overlay2/92d42128dacdf015e3ce466b8e365093147199e2fffcda0192857efed322565f/diff:/var/lib/docker/overlay2/0f2dff826ddc5a3be056ecb8791656438fd8d9122e0bfa4bf808ff640ddd0366/diff:/var/lib/d
ocker/overlay2/44a9089aeee67c883a076dc1940e80698f487176c3d197f321518402ce7a4467/diff:/var/lib/docker/overlay2/6068fe71ba149c31fa6947b978b0755f11f334f9d40e14b5c9946cf9a103ca68/diff:/var/lib/docker/overlay2/adb5ed5619948c4b7e4d83048cd96cc3d6ded2ae453b67da2e120f4ada989e97/diff:/var/lib/docker/overlay2/d633ebbd9eed2900d2e31406be983b7d21e70ac3c07593de38c5cfb0628275ae/diff:/var/lib/docker/overlay2/87f4a27d0733b1bdf23169c5079f854d115bfd926c76a346d28259b8f2abc0f9/diff:/var/lib/docker/overlay2/4b514ac9d0ce1d6bff4ec77673304888b5a45fca7d9a52d872475d70a4bad242/diff:/var/lib/docker/overlay2/76f964a17c8531bd97500c5bf3aa0b003b317ad1c055c0d1c475d41666734b75/diff:/var/lib/docker/overlay2/0a0f3b972da362a17d673ffdcd0d42b3663faeed5e799b2b38868036d5cd1533/diff:/var/lib/docker/overlay2/a07c41d799979e1f64f7bf3d0bcd9a98b724ebea06eafa1a01b83c71c76f9d3c/diff:/var/lib/docker/overlay2/0be1fd774bf851dd17c525a17f8a015aa3c0f1f71b29033666a62cd2be3a495f/diff:/var/lib/docker/overlay2/62db7acc5b1cb93b6e26eb5c826b67cebb252c079fd5a060ba843227c91
c864f/diff:/var/lib/docker/overlay2/076dea682ce5421a9c145f8038044bf438f06c3635406efdf60ef350f109389f/diff:/var/lib/docker/overlay2/143de4d69dc548610d4e281cfb14bf70d7ed81172bee212fc15755591dea37b4/diff:/var/lib/docker/overlay2/89ecf87d7b563ffa220047c3bb13c7ea55ebb215cbd3d2731d795ce559d5b9b4/diff:/var/lib/docker/overlay2/e9f8c0a087f0832425535d00100392d8b267181825a52ae7291fb7fe7ab62614/diff:/var/lib/docker/overlay2/66fb715c26be36afdfe15f9e2562f7320c04421f7bff30da6424afc0395d1f19/diff:/var/lib/docker/overlay2/24d5a6709af6741b4216757263798c2fd2ffbe83a81f68619cd00e2107b4ff3d/diff:/var/lib/docker/overlay2/865a5915817b4d31f71061a418fcc1c284ee124c9b3a275c3676cb2b3fba32dd/diff:/var/lib/docker/overlay2/b33545ce05c040395c79c17ae2fc9b23755b589f9f6e2f94121abe1cc5c2869c/diff:/var/lib/docker/overlay2/22f66646b2dde6f03ac24f5affc8a43db7aaae6b2e9677ae4cf9e607238761e4/diff:/var/lib/docker/overlay2/789c281f8e044ab343c9800dc7431b8fbaf616ecd3419979e8a3dfbb605f8efe/diff:/var/lib/docker/overlay2/6dd50d303cdaa1e2fa047ed92b16580d8b0c2c
77552b9a13e0c356884add5310/diff:/var/lib/docker/overlay2/b1d8d5816bce1b48db468539e1bc343a7c87dee89fb1783174081611a7e0b2ee/diff:/var/lib/docker/overlay2/529b543dd76f6ad1b33944f7c0767adca9befb5d162c4c1bf13756f3c0048fb4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/150110978bc2f15a119701db74fd05ade6a29d94a9fb31892efa87387f51a61f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/150110978bc2f15a119701db74fd05ade6a29d94a9fb31892efa87387f51a61f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/150110978bc2f15a119701db74fd05ade6a29d94a9fb31892efa87387f51a61f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-103656",
	                "Source": "/var/lib/docker/volumes/test-preload-103656/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-103656",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-103656",
	                "name.minikube.sigs.k8s.io": "test-preload-103656",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d95d0cd591e8aa5b35e19756a9dd3d50d80b3dcd3345133007cf4780233ee7d2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d95d0cd591e8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-103656": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "134be9640130",
	                        "test-preload-103656"
	                    ],
	                    "NetworkID": "68dd9b009ea520fab02da04425741da6f1f7ea0c09363fe33aa360088f683e28",
	                    "EndpointID": "4907378d64c1ed25b6205ef3777c6e3d331c4be21b51e2599a1533ce11e1fabf",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
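Most of the inspect dump above is noise for this failure; `docker inspect` accepts a Go template to pull out individual fields, the same mechanism the harness itself uses in the Last Start log below. Illustrative queries against this run's container:

	docker inspect -f '{{.State.Status}}' test-preload-103656
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' test-preload-103656

The second template is the exact form minikube uses to discover the forwarded SSH port (32892 here).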
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-103656 -n test-preload-103656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-103656 -n test-preload-103656: exit status 2 (342.796963ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
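`--format={{.Host}}` restricts the output to the host field, which is why the command prints only `Running` even though it exits non-zero. minikube documents a bit-encoded exit status for `status` (host, cluster and Kubernetes health on separate bits), so exit status 2 is consistent with the container being up while the cluster inside it is not; dropping the format flag would show all component fields:

	out/minikube-linux-amd64 status -p test-preload-103656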
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-103656 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822:/home/docker/cp-test_multinode-102822-m03_multinode-102822.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822 sudo cat                                       | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m03_multinode-102822.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt                       | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m02:/home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n                                                                 | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | multinode-102822-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-102822 ssh -n multinode-102822-m02 sudo cat                                   | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	|         | /home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-102822 node stop m03                                                          | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:30 UTC |
	| node    | multinode-102822 node start                                                             | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:30 UTC | 14 Jan 23 10:31 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC |                     |
	| stop    | -p multinode-102822                                                                     | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC | 14 Jan 23 10:31 UTC |
	| start   | -p multinode-102822                                                                     | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:31 UTC |                     |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC |                     |
	| node    | multinode-102822 node delete                                                            | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-102822 stop                                                                   | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	| start   | -p multinode-102822                                                                     | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-102822                                                                | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC |                     |
	| start   | -p multinode-102822-m02                                                                 | multinode-102822-m02 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-102822-m03                                                                 | multinode-102822-m03 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-102822                                                                 | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC |                     |
	| delete  | -p multinode-102822-m03                                                                 | multinode-102822-m03 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	| delete  | -p multinode-102822                                                                     | multinode-102822     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	| start   | -p test-preload-103656                                                                  | test-preload-103656  | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:37 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-103656                                                                  | test-preload-103656  | jenkins | v1.28.0 | 14 Jan 23 10:37 UTC | 14 Jan 23 10:37 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-103656                                                                  | test-preload-103656  | jenkins | v1.28.0 | 14 Jan 23 10:37 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:37:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:37:51.002864  130375 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:37:51.002987  130375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:37:51.002999  130375 out.go:309] Setting ErrFile to fd 2...
	I0114 10:37:51.003007  130375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:37:51.003477  130375 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:37:51.004152  130375 out.go:303] Setting JSON to false
	I0114 10:37:51.005372  130375 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4818,"bootTime":1673687853,"procs":502,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:37:51.005442  130375 start.go:135] virtualization: kvm guest
	I0114 10:37:51.007911  130375 out.go:177] * [test-preload-103656] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:37:51.009298  130375 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:37:51.009248  130375 notify.go:220] Checking for updates...
	I0114 10:37:51.011741  130375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:37:51.012942  130375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:37:51.014056  130375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:37:51.015244  130375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:37:51.016719  130375 config.go:180] Loaded profile config "test-preload-103656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0114 10:37:51.018319  130375 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0114 10:37:51.019428  130375 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:37:51.047829  130375 docker.go:138] docker version: linux-20.10.22
	I0114 10:37:51.047971  130375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:37:51.142168  130375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-14 10:37:51.068287951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:37:51.142272  130375 docker.go:255] overlay module found
	I0114 10:37:51.145161  130375 out.go:177] * Using the docker driver based on existing profile
	I0114 10:37:51.146398  130375 start.go:294] selected driver: docker
	I0114 10:37:51.146412  130375 start.go:838] validating driver "docker" against &{Name:test-preload-103656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:37:51.146499  130375 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:37:51.147317  130375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:37:51.239425  130375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-14 10:37:51.166649377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:37:51.239717  130375 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:37:51.239746  130375 cni.go:95] Creating CNI manager for ""
	I0114 10:37:51.239754  130375 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:37:51.239769  130375 start_flags.go:319] config:
	{Name:test-preload-103656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:37:51.242659  130375 out.go:177] * Starting control plane node test-preload-103656 in cluster test-preload-103656
	I0114 10:37:51.243956  130375 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:37:51.245211  130375 out.go:177] * Pulling base image ...
	I0114 10:37:51.246721  130375 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:37:51.246761  130375 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:37:51.270626  130375 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:37:51.270649  130375 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:37:51.506522  130375 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0114 10:37:51.506551  130375 cache.go:57] Caching tarball of preloaded images
	I0114 10:37:51.506853  130375 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:37:51.508905  130375 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I0114 10:37:51.510171  130375 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:37:51.608881  130375 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0114 10:38:05.447131  130375 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:38:05.447223  130375 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:38:06.329607  130375 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I0114 10:38:06.329763  130375 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/config.json ...
	I0114 10:38:06.329967  130375 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:38:06.330010  130375 start.go:364] acquiring machines lock for test-preload-103656: {Name:mkd8e957d814494daa90511121940947d1656700 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:38:06.330125  130375 start.go:368] acquired machines lock for "test-preload-103656" in 78.365µs
	I0114 10:38:06.330141  130375 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:38:06.330147  130375 fix.go:55] fixHost starting: 
	I0114 10:38:06.330340  130375 cli_runner.go:164] Run: docker container inspect test-preload-103656 --format={{.State.Status}}
	I0114 10:38:06.354371  130375 fix.go:103] recreateIfNeeded on test-preload-103656: state=Running err=<nil>
	W0114 10:38:06.354400  130375 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:38:06.356629  130375 out.go:177] * Updating the running docker "test-preload-103656" container ...
	I0114 10:38:06.357999  130375 machine.go:88] provisioning docker machine ...
	I0114 10:38:06.358031  130375 ubuntu.go:169] provisioning hostname "test-preload-103656"
	I0114 10:38:06.358081  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.381331  130375 main.go:134] libmachine: Using SSH client type: native
	I0114 10:38:06.381503  130375 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0114 10:38:06.381521  130375 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-103656 && echo "test-preload-103656" | sudo tee /etc/hostname
	I0114 10:38:06.507868  130375 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-103656
	
	I0114 10:38:06.507942  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.531561  130375 main.go:134] libmachine: Using SSH client type: native
	I0114 10:38:06.531797  130375 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0114 10:38:06.531820  130375 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-103656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-103656/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-103656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:38:06.651516  130375 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:38:06.651548  130375 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:38:06.651570  130375 ubuntu.go:177] setting up certificates
	I0114 10:38:06.651579  130375 provision.go:83] configureAuth start
	I0114 10:38:06.651623  130375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-103656
	I0114 10:38:06.674255  130375 provision.go:138] copyHostCerts
	I0114 10:38:06.674313  130375 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:38:06.674322  130375 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:38:06.674383  130375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:38:06.674501  130375 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:38:06.674511  130375 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:38:06.674534  130375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:38:06.674580  130375 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:38:06.674587  130375 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:38:06.674606  130375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:38:06.674647  130375 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.test-preload-103656 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-103656]
	I0114 10:38:06.758438  130375 provision.go:172] copyRemoteCerts
	I0114 10:38:06.758498  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:38:06.758531  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.781782  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:06.867004  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0114 10:38:06.883984  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:38:06.900643  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:38:06.916980  130375 provision.go:86] duration metric: configureAuth took 265.387379ms
	I0114 10:38:06.917007  130375 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:38:06.917175  130375 config.go:180] Loaded profile config "test-preload-103656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I0114 10:38:06.917189  130375 machine.go:91] provisioned docker machine in 559.172216ms
	I0114 10:38:06.917197  130375 start.go:300] post-start starting for "test-preload-103656" (driver="docker")
	I0114 10:38:06.917204  130375 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:38:06.917243  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:38:06.917284  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:06.939506  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.022998  130375 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:38:07.025772  130375 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:38:07.025799  130375 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:38:07.025807  130375 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:38:07.025812  130375 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:38:07.025820  130375 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:38:07.025870  130375 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:38:07.025928  130375 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:38:07.025999  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:38:07.032480  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:38:07.049250  130375 start.go:303] post-start completed in 132.037948ms
	I0114 10:38:07.049327  130375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:38:07.049372  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:07.072552  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.152367  130375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:38:07.156222  130375 fix.go:57] fixHost completed within 826.071854ms
	I0114 10:38:07.156243  130375 start.go:83] releasing machines lock for "test-preload-103656", held for 826.105821ms
	I0114 10:38:07.156316  130375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-103656
	I0114 10:38:07.178629  130375 ssh_runner.go:195] Run: cat /version.json
	I0114 10:38:07.178671  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:07.178723  130375 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 10:38:07.178770  130375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-103656
	I0114 10:38:07.204106  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.205840  130375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/test-preload-103656/id_rsa Username:docker}
	I0114 10:38:07.306907  130375 ssh_runner.go:195] Run: systemctl --version
	I0114 10:38:07.310615  130375 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:38:07.322602  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:38:07.331962  130375 docker.go:189] disabling docker service ...
	I0114 10:38:07.332014  130375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:38:07.341188  130375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:38:07.349832  130375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:38:07.452797  130375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:38:07.546759  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:38:07.555705  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:38:07.567582  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0114 10:38:07.575254  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:38:07.583779  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:38:07.591757  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0114 10:38:07.599621  130375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:38:07.605752  130375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:38:07.611809  130375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:38:07.712619  130375 ssh_runner.go:195] Run: sudo systemctl restart containerd
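
The sequence above switches the node's runtime over to containerd: crio and docker are stopped and masked, crictl is pointed at the containerd socket, and four settings in /etc/containerd/config.toml are rewritten in place with sed before the daemon is restarted. A minimal Go sketch of those in-place rewrites (the patchContainerd helper is hypothetical; the keys and values are taken from the commands above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // patchContainerd applies the same "key = value" rewrites as the
    // sed calls in the log above; the helper itself is illustrative.
    func patchContainerd() error {
    	edits := map[string]string{
    		"sandbox_image":          `"k8s.gcr.io/pause:3.7"`,
    		"restrict_oom_score_adj": "false",
    		"SystemdCgroup":          "false",
    		"conf_dir":               `"/etc/cni/net.mk"`,
    	}
    	for key, val := range edits {
    		expr := fmt.Sprintf(`s|^.*%s = .*$|%s = %s|`, key, key, val)
    		cmd := exec.Command("sudo", "sed", "-e", expr, "-i", "/etc/containerd/config.toml")
    		if err := cmd.Run(); err != nil {
    			return err
    		}
    	}
    	// Pick up the new config, as the log does next.
    	return exec.Command("sudo", "systemctl", "restart", "containerd").Run()
    }

    func main() { _ = patchContainerd() }
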
	I0114 10:38:07.790522  130375 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:38:07.790586  130375 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:38:07.794469  130375 start.go:472] Will wait 60s for crictl version
	I0114 10:38:07.794530  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:07.797517  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:38:07.822052  130375 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:38:07Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:38:18.871493  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:38:18.894957  130375 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
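
The first crictl probe fails because containerd's CRI server has not finished initializing after the restart, so retry.go backs off roughly 11 s and tries again, succeeding on the second attempt. A minimal sketch of that retry shape, assuming a fixed delay (minikube's actual retry.go uses randomized backoff):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // retryCommand re-runs a command until it succeeds or attempts run
    // out, sleeping between tries. Hypothetical helper for illustration.
    func retryCommand(name string, args []string, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command(name, args...).Run(); err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	// Mirrors the log: crictl fails until the CRI plugin is up.
    	_ = retryCommand("sudo", []string{"/usr/bin/crictl", "version"}, 5, 11*time.Second)
    }
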
	I0114 10:38:18.895006  130375 ssh_runner.go:195] Run: containerd --version
	I0114 10:38:18.917350  130375 ssh_runner.go:195] Run: containerd --version
	I0114 10:38:18.941262  130375 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
	I0114 10:38:18.942722  130375 cli_runner.go:164] Run: docker network inspect test-preload-103656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:38:18.964764  130375 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0114 10:38:18.968124  130375 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:38:18.968201  130375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:38:18.990809  130375 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I0114 10:38:18.990864  130375 ssh_runner.go:195] Run: which lz4
	I0114 10:38:18.993842  130375 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0114 10:38:18.996676  130375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0114 10:38:18.996704  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I0114 10:38:19.887901  130375 containerd.go:496] Took 0.894080 seconds to copy over tarball
	I0114 10:38:19.887965  130375 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0114 10:38:22.578454  130375 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690461959s)
	I0114 10:38:22.578487  130375 containerd.go:503] Took 2.690557 seconds to extract the tarball
	I0114 10:38:22.578499  130375 ssh_runner.go:146] rm: /preloaded.tar.lz4
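
The preload path above is: stat the tarball on the node (the failed existence check is expected on a fresh container), scp the ~460 MB preloaded-images archive, unpack it into /var with lz4, and delete it. A compact sketch of that flow, with the SSH transport simplified to a local bash runner (hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runOnNode stands in for minikube's ssh_runner: here it just runs
    // the command locally via bash (an illustrative simplification).
    func runOnNode(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // ensurePreload mirrors the log: stat to see whether the tarball is
    // already there, otherwise (copy elided) extract into /var and
    // remove the archive so the images land in containerd's store.
    func ensurePreload() error {
    	if runOnNode(`stat -c "%s %y" /preloaded.tar.lz4`) == nil {
    		return nil // already copied; skip the ~460 MB transfer
    	}
    	// scp of the preloaded-images tarball happens here in the real path.
    	if err := runOnNode("sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return fmt.Errorf("extract: %w", err)
    	}
    	return runOnNode("rm /preloaded.tar.lz4")
    }

    func main() { _ = ensurePreload() }
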
	I0114 10:38:22.596750  130375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:38:22.700364  130375 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:38:22.772561  130375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:38:22.801353  130375 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0114 10:38:22.801435  130375 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:22.801455  130375 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:22.801476  130375 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:22.801498  130375 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:22.801484  130375 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:22.801456  130375 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:22.801583  130375 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:22.801566  130375 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0114 10:38:22.802709  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:22.802715  130375 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:22.802722  130375 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:22.802737  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:22.802715  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:22.802770  130375 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0114 10:38:22.802853  130375 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:22.802932  130375 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:23.243056  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0114 10:38:23.248174  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0114 10:38:23.270429  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0114 10:38:23.286046  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I0114 10:38:23.306009  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I0114 10:38:23.352775  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I0114 10:38:23.366810  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I0114 10:38:23.638648  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:38:24.020996  130375 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0114 10:38:24.021227  130375 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0114 10:38:24.021169  130375 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0114 10:38:24.021313  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.021332  130375 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:24.021391  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.128609  130375 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0114 10:38:24.128660  130375 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:24.128706  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.147374  130375 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I0114 10:38:24.147425  130375 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:24.147463  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.222049  130375 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I0114 10:38:24.222094  130375 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:24.222135  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.252508  130375 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I0114 10:38:24.252563  130375 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:24.252614  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.252621  130375 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I0114 10:38:24.252661  130375 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:24.252703  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.350906  130375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0114 10:38:24.350931  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0114 10:38:24.350955  130375 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:24.350994  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:24.350995  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0114 10:38:24.351085  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0114 10:38:24.351175  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I0114 10:38:24.351214  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I0114 10:38:24.351282  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I0114 10:38:24.351321  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I0114 10:38:26.457836  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (2.10675309s)
	I0114 10:38:26.457871  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0114 10:38:26.457879  130375 ssh_runner.go:235] Completed: which crictl: (2.106855647s)
	I0114 10:38:26.457914  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (2.106963875s)
	I0114 10:38:26.457929  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0114 10:38:26.457934  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:38:26.457932  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0114 10:38:26.457991  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0114 10:38:26.457987  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (2.106883834s)
	I0114 10:38:26.458010  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0114 10:38:26.458032  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (2.106838738s)
	I0114 10:38:26.458042  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I0114 10:38:26.458044  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0114 10:38:26.458061  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (2.106824178s)
	I0114 10:38:26.458075  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I0114 10:38:26.458138  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (2.10676597s)
	I0114 10:38:26.458154  130375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6: (2.106848252s)
	I0114 10:38:26.458160  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I0114 10:38:26.458161  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I0114 10:38:26.487938  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0114 10:38:26.487965  130375 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0114 10:38:26.488021  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0114 10:38:26.488066  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0114 10:38:26.488140  130375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0114 10:38:26.488171  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0114 10:38:26.488226  130375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:38:32.755504  130375 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (6.267452686s)
	I0114 10:38:32.755578  130375 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I0114 10:38:32.755651  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0114 10:38:32.755582  130375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (6.267329372s)
	I0114 10:38:32.755716  130375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0114 10:38:33.756342  130375 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7: (1.000655944s)
	I0114 10:38:33.756374  130375 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0114 10:38:33.756397  130375 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0114 10:38:33.756437  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0114 10:38:34.834072  130375 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.077603132s)
	I0114 10:38:34.834103  130375 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0114 10:38:34.834135  130375 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:38:34.834187  130375 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:38:35.234947  130375 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0114 10:38:35.235008  130375 cache_images.go:92] LoadImages completed in 12.433632171s
	W0114 10:38:35.235189  130375 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
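
LoadImages first asks ctr whether each required image is already in the k8s.io namespace, removes mismatched copies with crictl rmi, and imports the per-image tarballs from the local cache; the closing warning fires because kube-proxy_v1.24.6 was never written to that cache. A single-image sketch of the check-then-import step (loadImage and the cache path are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadImage sketches one iteration of the loop above: verify via
    // ctr, otherwise import the cached tarball. Not minikube's real API.
    func loadImage(image, cacheTar string) error {
    	check := fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s", image)
    	if exec.Command("/bin/bash", "-c", check).Run() == nil {
    		return nil // already present in the k8s.io namespace
    	}
    	if _, err := os.Stat(cacheTar); err != nil {
    		// The failure mode behind the warning above: the tarball
    		// for this image is missing from the local cache.
    		return fmt.Errorf("loading cached images: %w", err)
    	}
    	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", cacheTar).Run()
    }

    func main() {
    	err := loadImage("k8s.gcr.io/kube-proxy:v1.24.6",
    		"/home/jenkins/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6")
    	if err != nil {
    		fmt.Println("X Unable to load cached images:", err)
    	}
    }
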
	I0114 10:38:35.235251  130375 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:38:35.261888  130375 cni.go:95] Creating CNI manager for ""
	I0114 10:38:35.261913  130375 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:38:35.261928  130375 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:38:35.261947  130375 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-103656 NodeName:test-preload-103656 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:38:35.262107  130375 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-103656"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:38:35.262227  130375 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-103656 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
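
The InitConfiguration/ClusterConfiguration blocks and the kubelet unit above are rendered from the kubeadm options struct logged at kubeadm.go:158. A minimal text/template sketch of how the InitConfiguration slice could be generated (the Options struct is illustrative, not minikube's real type):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Options carries just the fields the template below needs; the
    // names mirror the kubeadm options in the log, the struct is not.
    type Options struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	CRISocket        string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      taints: []
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	_ = t.Execute(os.Stdout, Options{
    		AdvertiseAddress: "192.168.67.2",
    		APIServerPort:    8443,
    		NodeName:         "test-preload-103656",
    		CRISocket:        "/run/containerd/containerd.sock",
    	})
    }
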
	I0114 10:38:35.262275  130375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I0114 10:38:35.270359  130375 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:38:35.270436  130375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:38:35.323452  130375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I0114 10:38:35.339112  130375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:38:35.354377  130375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0114 10:38:35.368005  130375 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:38:35.371353  130375 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656 for IP: 192.168.67.2
	I0114 10:38:35.371467  130375 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:38:35.371525  130375 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:38:35.371624  130375 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/client.key
	I0114 10:38:35.371732  130375 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/apiserver.key.c7fa3a9e
	I0114 10:38:35.371782  130375 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/proxy-client.key
	I0114 10:38:35.371906  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:38:35.371945  130375 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:38:35.371959  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:38:35.371990  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:38:35.372028  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:38:35.372073  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:38:35.372128  130375 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:38:35.372989  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:38:35.438659  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:38:35.459078  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:38:35.521122  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 10:38:35.543193  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:38:35.565246  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:38:35.636400  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:38:35.658645  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:38:35.679179  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:38:35.736936  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:38:35.756587  130375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:38:35.776144  130375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:38:35.829677  130375 ssh_runner.go:195] Run: openssl version
	I0114 10:38:35.835452  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:38:35.844615  130375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:38:35.848677  130375 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:38:35.848736  130375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:38:35.854366  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:38:35.862771  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:38:35.871237  130375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:38:35.874979  130375 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:38:35.875031  130375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:38:35.925670  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:38:35.934438  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:38:35.943308  130375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:38:35.946992  130375 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:38:35.947066  130375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:38:35.952914  130375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
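
Each CA above is copied into /usr/share/ca-certificates and then symlinked under its OpenSSL subject hash (here b5213941.0, 51391683.0, 3ec20f2e.0) so the TLS stack can locate it in /etc/ssl/certs. A sketch of deriving that link name by shelling out to openssl (linkCert is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCert reproduces the pattern in the log: compute the OpenSSL
    // subject hash of a CA and symlink <hash>.0 to it. Illustrative only.
    func linkCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // emulate ln -fs: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	// e.g. minikubeCA.pem hashes to b5213941, matching the log.
    	_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem")
    }
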
	I0114 10:38:35.960823  130375 kubeadm.go:396] StartCluster: {Name:test-preload-103656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-103656 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:38:35.960927  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:38:35.960969  130375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:38:36.030809  130375 cri.go:87] found id: "79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1"
	I0114 10:38:36.030836  130375 cri.go:87] found id: "1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a"
	I0114 10:38:36.030848  130375 cri.go:87] found id: ""
	I0114 10:38:36.030901  130375 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0114 10:38:36.069795  130375 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e","pid":3374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e/rootfs","created":"2023-01-14T10:38:26.038691683Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-103656_b5ce6aaa8feda8e03d2ece032691651a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-103656","io.kuber
netes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","pid":3389,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608/rootfs","created":"2023-01-14T10:38:26.042050411Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_0c70afb0-95ab-4b58-84ba-92f0658d439b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.c
ri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a","pid":3438,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a/rootfs","created":"2023-01-14T10:38:26.268614811Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f","pid":3619,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f/rootfs","created":"2023-01-14T10:38:27.25000565Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-d9dwj_f015d1c6-ed2c-4a5c-89ca-3aa07dc45194","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-d9dwj","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896","pid":33
75,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896/rootfs","created":"2023-01-14T10:38:26.043120078Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-103656_cf327063d0646260216423cc46e62e98","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573
599bfde8b13","pid":2193,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13/rootfs","created":"2023-01-14T10:37:38.323610353Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jnf8g_c4b1d229-e57c-4245-b94c-5f87340ac132","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-jnf8g","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50",
"pid":2641,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50/rootfs","created":"2023-01-14T10:37:46.231018242Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-d9dwj","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","pid":2582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","rootfs":"/run/containerd/io
.containerd.runtime.v2.task/k8s.io/46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a/rootfs","created":"2023-01-14T10:37:46.123706723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_0c70afb0-95ab-4b58-84ba-92f0658d439b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","pid":1510,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","rootfs":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113/rootfs","created":"2023-01-14T10:37:18.662644331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-103656_b5ce6aaa8feda8e03d2ece032691651a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","rootfs":"/run/containerd/io.conta
inerd.runtime.v2.task/k8s.io/6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea/rootfs","created":"2023-01-14T10:37:18.664453109Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-103656_eed4bab1d5401611f9e0dbfc25eace67","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","rootfs":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab/rootfs","created":"2023-01-14T10:37:18.66445572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-103656_5e3f9e81a771cae657895e0ac9a4db8b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c","pid":2642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79a0a4f98c70307c7c45ff7122be3118ff508b1de
76fdb59984eed1bfaa0784c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c/rootfs","created":"2023-01-14T10:37:46.231183602Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","pid":1513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691/rootfs","created":
"2023-01-14T10:37:18.662850526Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-103656_cf327063d0646260216423cc46e62e98","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","pid":2583,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe038
59/rootfs","created":"2023-01-14T10:37:46.121931291Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-d9dwj_f015d1c6-ed2c-4a5c-89ca-3aa07dc45194","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-d9dwj","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d","pid":1647,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7
950aaa4e348b64d/rootfs","created":"2023-01-14T10:37:18.849530018Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4/rootfs","created":"2023-01-14T10:37:18.840907916Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kuber
netes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21","pid":2228,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21/rootfs","created":"2023-01-14T10:37:38.410230285Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"218b757d1439e7e68e31cb0d83782657befb7c1dd
78f585ff3573599bfde8b13","io.kubernetes.cri.sandbox-name":"kube-proxy-jnf8g","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36","pid":1627,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36/rootfs","created":"2023-01-14T10:37:18.839041607Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","pid":2192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba/rootfs","created":"2023-01-14T10:37:38.321323425Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dvmsq_bf124d4e-df5c-446c-bb84-adb2312fb0d7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-dvmsq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id"
:"bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34","pid":3571,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34/rootfs","created":"2023-01-14T10:38:27.052699215Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-103656_eed4bab1d5401611f9e0dbfc25eace67","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f","pid":2449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f/rootfs","created":"2023-01-14T10:37:41.121120967Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba","io.kubernetes.cri.sandbox-name":"kindnet-dvmsq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c/rootfs","created":"2023-01-14T10:38:27.126924717Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-dvmsq_bf124d4e-df5c-446c-bb84-adb2312fb0d7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-dvmsq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22","pid":1654,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfdd23
b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22/rootfs","created":"2023-01-14T10:37:18.849606176Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113","io.kubernetes.cri.sandbox-name":"etcd-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901","pid":3371,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901/rootfs","cre
ated":"2023-01-14T10:38:26.042915189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-103656_5e3f9e81a771cae657895e0ac9a4db8b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-103656","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I0114 10:38:36.070165  130375 cri.go:124] list returned 24 containers
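The JSON listing above is what cri.go decodes into the {ID, Status} pairs logged below. A minimal sketch of that decoding step; the struct and field names here are assumptions for illustration, not minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
)

// containerState holds the two fields the skip decisions below rely on;
// the struct name and the sample input are assumptions for this sketch.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// Truncated sample of the runc-style listing above.
	raw := []byte(`[{"id":"c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f","status":"running"}]`)
	var containers []containerState
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	fmt.Printf("list returned %d containers\n", len(containers))
	for _, c := range containers {
		fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
	}
}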
	I0114 10:38:36.070183  130375 cri.go:127] container: {ID:006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e Status:running}
	I0114 10:38:36.070225  130375 cri.go:129] skipping 006c42df5db5dc837ac62d8578941ce26c6860e44f7b186318e497cf5a6ad66e - not in ps
	I0114 10:38:36.070230  130375 cri.go:127] container: {ID:02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608 Status:running}
	I0114 10:38:36.070243  130375 cri.go:129] skipping 02631890e833c7e357ee07dc2a338a27a51d7911c159e64333a5ecf20f625608 - not in ps
	I0114 10:38:36.070248  130375 cri.go:127] container: {ID:1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a Status:running}
	I0114 10:38:36.070260  130375 cri.go:133] skipping {1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a running}: state = "running", want "paused"
	I0114 10:38:36.070274  130375 cri.go:127] container: {ID:1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f Status:running}
	I0114 10:38:36.070287  130375 cri.go:129] skipping 1d177b3622d62d3631ceaad24607c3129dbf89276356217fc286886c2395b92f - not in ps
	I0114 10:38:36.070297  130375 cri.go:127] container: {ID:1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896 Status:running}
	I0114 10:38:36.070308  130375 cri.go:129] skipping 1fb38d489d58b6b7b9b6562ef5949ab14d853d020206d692af57817778e6c896 - not in ps
	I0114 10:38:36.070313  130375 cri.go:127] container: {ID:218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13 Status:running}
	I0114 10:38:36.070321  130375 cri.go:129] skipping 218b757d1439e7e68e31cb0d83782657befb7c1dd78f585ff3573599bfde8b13 - not in ps
	I0114 10:38:36.070327  130375 cri.go:127] container: {ID:3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50 Status:running}
	I0114 10:38:36.070346  130375 cri.go:129] skipping 3420de3a84b75135272a65078dbb3a776413f5417b908a91bb8026376e60ec50 - not in ps
	I0114 10:38:36.070356  130375 cri.go:127] container: {ID:46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a Status:running}
	I0114 10:38:36.070364  130375 cri.go:129] skipping 46d34e1e72ee3c783a03d59dc0ccf77f2df7255e1e614fd335c0bdec800d1f5a - not in ps
	I0114 10:38:36.070373  130375 cri.go:127] container: {ID:665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113 Status:running}
	I0114 10:38:36.070381  130375 cri.go:129] skipping 665adf00f0d7c77ae1000851ba8f27cf9d36ae9d964190c13ee943c17f39e113 - not in ps
	I0114 10:38:36.070390  130375 cri.go:127] container: {ID:6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea Status:running}
	I0114 10:38:36.070402  130375 cri.go:129] skipping 6a033048aff2fe69221c46821367fc8951e3a94cb00c8ecdc2318e045c85a6ea - not in ps
	I0114 10:38:36.070411  130375 cri.go:127] container: {ID:77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab Status:running}
	I0114 10:38:36.070418  130375 cri.go:129] skipping 77785e126a5497304d1bc6a9b1964e3bb9b08a4f92e799515550f6fbc562e6ab - not in ps
	I0114 10:38:36.070428  130375 cri.go:127] container: {ID:79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c Status:running}
	I0114 10:38:36.070435  130375 cri.go:129] skipping 79a0a4f98c70307c7c45ff7122be3118ff508b1de76fdb59984eed1bfaa0784c - not in ps
	I0114 10:38:36.070444  130375 cri.go:127] container: {ID:7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691 Status:running}
	I0114 10:38:36.070452  130375 cri.go:129] skipping 7af4e6ad3084feea2b1ac4d35aad7888c6d942db1da777d79159ca52b4617691 - not in ps
	I0114 10:38:36.070462  130375 cri.go:127] container: {ID:7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859 Status:running}
	I0114 10:38:36.070469  130375 cri.go:129] skipping 7cbf34b537733d49fee2d6e73037a1de9b98c79773da940559d3e727bfe03859 - not in ps
	I0114 10:38:36.070475  130375 cri.go:127] container: {ID:8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d Status:running}
	I0114 10:38:36.070485  130375 cri.go:129] skipping 8524bb853aa84cc1873ce7492815149f7484595ef26e8aff7950aaa4e348b64d - not in ps
	I0114 10:38:36.070490  130375 cri.go:127] container: {ID:8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4 Status:running}
	I0114 10:38:36.070497  130375 cri.go:129] skipping 8b9683d31babc38c1ea06b48b8013f474408a5f18afd3473b5eda86741dec3a4 - not in ps
	I0114 10:38:36.070505  130375 cri.go:127] container: {ID:927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21 Status:running}
	I0114 10:38:36.070523  130375 cri.go:129] skipping 927dbe1ba33930b94989cf72c2b9031c786c5cde278c59fcff9f08e1fe71cd21 - not in ps
	I0114 10:38:36.070562  130375 cri.go:127] container: {ID:a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36 Status:running}
	I0114 10:38:36.070581  130375 cri.go:129] skipping a1ca7a49cc2dfaba6509942ad6340bb93cdb8a32d816d2586538c5f66ccbab36 - not in ps
	I0114 10:38:36.070591  130375 cri.go:127] container: {ID:ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba Status:running}
	I0114 10:38:36.070603  130375 cri.go:129] skipping ad10c5b3b38d0e791780236d0765544f263eefa7d0d95a31133cf48b720588ba - not in ps
	I0114 10:38:36.070616  130375 cri.go:127] container: {ID:bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34 Status:running}
	I0114 10:38:36.070626  130375 cri.go:129] skipping bc72ba592740125fe859185943194c656b40e69498eb8675c9c5200b3c547d34 - not in ps
	I0114 10:38:36.070635  130375 cri.go:127] container: {ID:c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f Status:running}
	I0114 10:38:36.070645  130375 cri.go:129] skipping c8516f0706239aa9457becdc8d6b24e390c22220b5be2a91fe49ad30c8a2315f - not in ps
	I0114 10:38:36.070654  130375 cri.go:127] container: {ID:d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c Status:running}
	I0114 10:38:36.070665  130375 cri.go:129] skipping d1db9355b2df09866c8802f552b87eb174475d6401662d28b70542a79c8a5a7c - not in ps
	I0114 10:38:36.070675  130375 cri.go:127] container: {ID:dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22 Status:running}
	I0114 10:38:36.070684  130375 cri.go:129] skipping dfdd23b7144d3f224d75e7a733b427306bf908cfb8e4fbecd3931c0c104cfb22 - not in ps
	I0114 10:38:36.070693  130375 cri.go:127] container: {ID:f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901 Status:running}
	I0114 10:38:36.070703  130375 cri.go:129] skipping f7609094bdec471b5427903aaef9a5cf2613c02ba071c4d51c3e88ef5e326901 - not in ps
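Every container above is skipped for one of two reasons: it is absent from the crictl listing ("not in ps"), or its state does not match the state the caller asked for (running here, paused wanted). A hedged sketch of that filter, reusing the containerState type from the sketch above; the function name and signature are illustrative, not minikube's cri.go API:

// filterContainers keeps only the IDs that appear in the crictl listing
// (inPS) and are already in the desired state, mirroring the two skip
// messages in the log above.
func filterContainers(all []containerState, inPS map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != want {
			continue // skipping: state = "running", want "paused"
		}
		keep = append(keep, c.ID)
	}
	return keep
}

With every container running and none of them in the crictl pause listing, the filter returns an empty set, which is why the run proceeds straight to the restart path below.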
	I0114 10:38:36.070748  130375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:38:36.078824  130375 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:38:36.078854  130375 kubeadm.go:627] restartCluster start
	I0114 10:38:36.078901  130375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:38:36.122640  130375 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:38:36.123181  130375 kubeconfig.go:92] found "test-preload-103656" server: "https://192.168.67.2:8443"
	I0114 10:38:36.123876  130375 kapi.go:59] client config for test-preload-103656: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/test-preload-103656/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:38:36.124406  130375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:38:36.132023  130375 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-01-14 10:37:14.960632955 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-01-14 10:38:35.362143242 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
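The reconfigure decision above hinges on nothing more than the exit status of diff -u: exit 0 means the rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, and a non-zero exit is treated as "configs differ". A minimal sketch of that check; illustrative only, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff -u exits 0 when the files are identical and 1 when they
	// differ, so any non-nil error here is read as "needs reconfigure".
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("needs reconfigure: configs differ:\n%s", out)
		return
	}
	fmt.Println("configs match, nothing to do")
}

Here the only difference is the kubernetesVersion bump from v1.24.4 to v1.24.6, which is exactly what the preload upgrade is exercising.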
	I0114 10:38:36.132048  130375 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:38:36.132061  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:38:36.132119  130375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:38:36.160539  130375 cri.go:87] found id: "2e461cc338f9a09ff93a1d26f9c81992089fd75b8f78289564fb63e4a9931e0e"
	I0114 10:38:36.160572  130375 cri.go:87] found id: "79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1"
	I0114 10:38:36.160583  130375 cri.go:87] found id: "1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a"
	I0114 10:38:36.160592  130375 cri.go:87] found id: ""
	I0114 10:38:36.160599  130375 cri.go:232] Stopping containers: [2e461cc338f9a09ff93a1d26f9c81992089fd75b8f78289564fb63e4a9931e0e 79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1 1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a]
	I0114 10:38:36.160657  130375 ssh_runner.go:195] Run: which crictl
	I0114 10:38:36.163886  130375 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 2e461cc338f9a09ff93a1d26f9c81992089fd75b8f78289564fb63e4a9931e0e 79b25f35b620b82fa2886964e36320fcfbae8b5886877d2fba9daa58f1f107f1 1314aeb4b21d77fa2d83047166b42869dd32332ce9c9cf103a1e9b6adb29cc9a
	I0114 10:38:36.650086  130375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 10:38:36.727192  130375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:38:36.734488  130375 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Jan 14 10:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 14 10:37 /etc/kubernetes/scheduler.conf
	
	I0114 10:38:36.734539  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:38:36.741370  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:38:36.747971  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:38:36.754623  130375 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:38:36.754729  130375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 10:38:36.761563  130375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:38:36.768329  130375 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:38:36.768396  130375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 10:38:36.774780  130375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:38:36.781563  130375 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 10:38:36.781581  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:36.973702  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:37.742032  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:37.961156  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:38.011221  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:38.131693  130375 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:38:38.131756  130375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:38:38.144596  130375 api_server.go:71] duration metric: took 12.924067ms to wait for apiserver process to appear ...
	I0114 10:38:38.144647  130375 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:38:38.144661  130375 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0114 10:38:38.150219  130375 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0114 10:38:38.221645  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:38.221684  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:38.723441  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:38.723479  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:39.223074  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:39.223113  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:39.723804  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:39.723836  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	I0114 10:38:40.223490  130375 api_server.go:140] control plane version: v1.24.4
	W0114 10:38:40.223523  130375 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
	W0114 10:38:40.722537  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:41.222838  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:41.722762  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:42.222627  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:42.722171  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:43.223164  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:43.723196  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0114 10:38:44.223018  130375 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I0114 10:38:47.121508  130375 api_server.go:140] control plane version: v1.24.6
	I0114 10:38:47.121545  130375 api_server.go:130] duration metric: took 8.976889643s to wait for apiserver health ...
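The wait above first confirms /healthz returns 200, then polls /version roughly every 500ms until the reported control-plane version matches the expected v1.24.6; the "connection refused" stretch marks the window where the apiserver restarts under its new static-pod manifest. A sketch of such a poll loop, assuming client is an *http.Client already configured to trust the cluster CA; this is not minikube's api_server.go:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitForVersion polls <base>/version until the apiserver reports the
// wanted gitVersion or the deadline passes. The apiserver's /version
// endpoint returns JSON that includes a gitVersion field.
func waitForVersion(client *http.Client, base, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/version")
		if err == nil {
			var v struct {
				GitVersion string `json:"gitVersion"`
			}
			json.NewDecoder(resp.Body).Decode(&v)
			resp.Body.Close()
			if v.GitVersion == want {
				return nil // control plane now reports the expected version
			}
		}
		// Tolerate "connection refused" while the static pod restarts.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for apiserver version %s", want)
}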
	I0114 10:38:47.121557  130375 cni.go:95] Creating CNI manager for ""
	I0114 10:38:47.121565  130375 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:38:47.123713  130375 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 10:38:47.125291  130375 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:38:47.130898  130375 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I0114 10:38:47.130926  130375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:38:47.232875  130375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:38:47.938528  130375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:38:47.945649  130375 system_pods.go:59] 7 kube-system pods found
	I0114 10:38:47.945684  130375 system_pods.go:61] "coredns-6d4b75cb6d-d9dwj" [f015d1c6-ed2c-4a5c-89ca-3aa07dc45194] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0114 10:38:47.945691  130375 system_pods.go:61] "etcd-test-preload-103656" [434495c0-10c7-4dc9-af84-dcec0b05fc6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 10:38:47.945698  130375 system_pods.go:61] "kindnet-dvmsq" [bf124d4e-df5c-446c-bb84-adb2312fb0d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0114 10:38:47.945706  130375 system_pods.go:61] "kube-controller-manager-test-preload-103656" [aea3925f-2ae7-4907-9061-948eb9d95520] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 10:38:47.945719  130375 system_pods.go:61] "kube-proxy-jnf8g" [c4b1d229-e57c-4245-b94c-5f87340ac132] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0114 10:38:47.945725  130375 system_pods.go:61] "kube-scheduler-test-preload-103656" [f63590d3-f765-4521-944f-10f98f6dc7a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 10:38:47.945730  130375 system_pods.go:61] "storage-provisioner" [0c70afb0-95ab-4b58-84ba-92f0658d439b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:38:47.945736  130375 system_pods.go:74] duration metric: took 7.183082ms to wait for pod list to return data ...
	I0114 10:38:47.945742  130375 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:38:47.947934  130375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:38:47.947957  130375 node_conditions.go:123] node cpu capacity is 8
	I0114 10:38:47.947970  130375 node_conditions.go:105] duration metric: took 2.221176ms to run NodePressure ...
	I0114 10:38:47.947986  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:38:48.173129  130375 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 10:38:48.177534  130375 kubeadm.go:778] kubelet initialised
	I0114 10:38:48.177560  130375 kubeadm.go:779] duration metric: took 4.40386ms waiting for restarted kubelet to initialise ...
	I0114 10:38:48.177570  130375 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0114 10:38:48.182970  130375 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace to be "Ready" ...
	I0114 10:38:50.230890  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:52.231044  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:54.231722  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:56.731766  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:38:59.230873  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:01.231296  130375 pod_ready.go:102] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:01.730700  130375 pod_ready.go:92] pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace has status "Ready":"True"
	I0114 10:39:01.730729  130375 pod_ready.go:81] duration metric: took 13.547734534s waiting for pod "coredns-6d4b75cb6d-d9dwj" in "kube-system" namespace to be "Ready" ...
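pod_ready treats a pod as Ready once its PodReady condition reports True, which is what flipped for coredns above and what never flips for etcd below. The same check expressed with client-go, as a hedged sketch; clientset is assumed to be an already-built *kubernetes.Clientset, and this is not minikube's pod_ready.go:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the named pod's PodReady condition is True.
func podReady(ctx context.Context, clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no PodReady condition recorded yet
}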
	I0114 10:39:01.730745  130375 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-103656" in "kube-system" namespace to be "Ready" ...
	I0114 10:39:03.740578  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:05.740633  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:08.239656  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:10.240563  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:12.740125  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:14.740485  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:17.240667  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:19.739861  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:21.740285  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:23.740687  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:26.240438  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:28.240638  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:30.740382  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:33.240772  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:35.739862  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:37.739916  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:39.740222  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:42.240563  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:44.740477  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:47.239316  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:49.240030  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:51.240588  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:53.242557  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:55.739571  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:57.740011  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:39:59.740216  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:01.740337  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:03.740857  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:05.740910  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:08.240164  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:10.240244  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:12.242658  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:14.740962  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:17.240319  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:19.740715  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:22.240738  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:24.739464  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:26.740286  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:29.240377  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:31.240594  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:33.741864  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:36.242080  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:38.740232  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:41.239877  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:43.740351  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:46.242521  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:48.739651  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:50.739975  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:52.740088  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:54.740137  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:56.740342  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:40:59.240517  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:01.240708  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:03.739815  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:05.739861  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:07.740249  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:09.740580  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:12.239661  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:14.239867  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:16.240137  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:18.740329  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:21.240196  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:23.739494  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:25.740245  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:28.240321  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:30.739941  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:32.740332  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:35.240389  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:37.739611  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:39.740466  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:42.240551  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:44.740029  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:47.239176  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:49.240287  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:51.740204  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:53.740740  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:56.239972  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:41:58.739548  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:00.740693  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:03.239951  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:05.740324  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:08.239393  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:10.739405  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:12.740554  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:15.239561  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:17.739425  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:19.740051  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:22.240411  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:24.740035  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:27.240177  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:29.240744  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:31.739990  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:34.240535  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:36.739791  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:39.240303  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:41.740554  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:44.239594  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:46.240573  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:48.740328  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:50.740616  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:53.239696  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:55.240770  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:57.739934  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:42:59.740686  130375 pod_ready.go:102] pod "etcd-test-preload-103656" in "kube-system" namespace has status "Ready":"False"
	I0114 10:43:01.735198  130375 pod_ready.go:81] duration metric: took 4m0.00443583s waiting for pod "etcd-test-preload-103656" in "kube-system" namespace to be "Ready" ...
	E0114 10:43:01.735242  130375 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-103656" in "kube-system" namespace to be "Ready" (will not retry!)
	I0114 10:43:01.735266  130375 pod_ready.go:38] duration metric: took 4m13.557680414s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:43:01.735298  130375 kubeadm.go:631] restartCluster took 4m25.6564342s
	W0114 10:43:01.735464  130375 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0114 10:43:01.735504  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0114 10:43:03.386915  130375 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.651383008s)
	I0114 10:43:03.386970  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:43:03.396144  130375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:43:03.402902  130375 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:43:03.402960  130375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:43:03.409317  130375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:43:03.409355  130375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:43:03.445845  130375 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0114 10:43:03.445947  130375 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:43:03.472908  130375 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:43:03.473085  130375 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:43:03.473148  130375 kubeadm.go:317] OS: Linux
	I0114 10:43:03.473233  130375 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:43:03.473338  130375 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:43:03.473421  130375 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:43:03.473483  130375 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:43:03.473539  130375 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:43:03.473595  130375 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:43:03.473649  130375 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:43:03.473710  130375 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:43:03.473792  130375 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:43:03.545232  130375 kubeadm.go:317] W0114 10:43:03.440995    6524 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:43:03.545502  130375 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:43:03.545641  130375 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:43:03.545723  130375 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0114 10:43:03.545781  130375 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0114 10:43:03.545843  130375 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0114 10:43:03.545950  130375 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0114 10:43:03.546035  130375 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0114 10:43:03.546248  130375 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.440995    6524 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
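Both fatal preflight errors point at the same condition: something, most likely the etcd brought up by the earlier restart attempt, is still listening on etcd's client port (2379) and peer port (2380), so kubeadm init refuses to proceed. A quick way to confirm that from Go; an illustrative probe, not part of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

// portInUse reports whether something is still accepting TCP connections
// on the given local port, e.g. a leftover etcd behind the errors above.
func portInUse(port int) bool {
	conn, err := net.DialTimeout("tcp", fmt.Sprintf("127.0.0.1:%d", port), 500*time.Millisecond)
	if err != nil {
		return false // nothing accepted the connection
	}
	conn.Close()
	return true
}

func main() {
	for _, p := range []int{2379, 2380} {
		fmt.Printf("port %d in use: %v\n", p, portInUse(p))
	}
}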
	
	I0114 10:43:03.546284  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0114 10:43:03.874686  130375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:43:03.884299  130375 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:43:03.884355  130375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:43:03.891278  130375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:43:03.891322  130375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:43:03.924537  130375 kubeadm.go:317] W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:43:03.955490  130375 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:43:04.017449  130375 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:43:04.017559  130375 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0114 10:43:04.017608  130375 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0114 10:43:04.017664  130375 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0114 10:43:04.017800  130375 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0114 10:43:04.017909  130375 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 10:43:04.019616  130375 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0114 10:43:04.019700  130375 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:43:04.019798  130375 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:43:04.019933  130375 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:43:04.020004  130375 kubeadm.go:317] OS: Linux
	I0114 10:43:04.020056  130375 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:43:04.020136  130375 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:43:04.020206  130375 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:43:04.020268  130375 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:43:04.020310  130375 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:43:04.020369  130375 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:43:04.020438  130375 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:43:04.020500  130375 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:43:04.020561  130375 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:43:04.020620  130375 kubeadm.go:398] StartCluster complete in 4m28.059804219s
	I0114 10:43:04.020658  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:43:04.020706  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:43:04.043514  130375 cri.go:87] found id: ""
	I0114 10:43:04.043533  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.043542  130375 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:43:04.043548  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:43:04.043601  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:43:04.066568  130375 cri.go:87] found id: ""
	I0114 10:43:04.066589  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.066596  130375 logs.go:276] No container was found matching "etcd"
	I0114 10:43:04.066602  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:43:04.066643  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:43:04.089342  130375 cri.go:87] found id: ""
	I0114 10:43:04.089369  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.089378  130375 logs.go:276] No container was found matching "coredns"
	I0114 10:43:04.089386  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:43:04.089436  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:43:04.112375  130375 cri.go:87] found id: ""
	I0114 10:43:04.112402  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.112414  130375 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:43:04.112423  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:43:04.112475  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:43:04.134889  130375 cri.go:87] found id: ""
	I0114 10:43:04.134917  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.134926  130375 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:43:04.134935  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:43:04.134978  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:43:04.157596  130375 cri.go:87] found id: ""
	I0114 10:43:04.157621  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.157627  130375 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:43:04.157634  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:43:04.157674  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:43:04.179584  130375 cri.go:87] found id: ""
	I0114 10:43:04.179612  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.179621  130375 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:43:04.179630  130375 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:43:04.179709  130375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:43:04.201868  130375 cri.go:87] found id: ""
	I0114 10:43:04.201894  130375 logs.go:274] 0 containers: []
	W0114 10:43:04.201903  130375 logs.go:276] No container was found matching "kube-controller-manager"
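Each probe above shells out to crictl with a name filter; an empty --quiet listing is what produces the paired "0 containers" and "No container was found" lines, confirming that the kubeadm reset left nothing running. A sketch of that probe; the helper is illustrative, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listByName runs the same crictl invocation seen in the log and splits
// the quiet output (one container ID per line) into a slice of IDs.
func listByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listByName("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}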
	I0114 10:43:04.201916  130375 logs.go:123] Gathering logs for kubelet ...
	I0114 10:43:04.201932  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:43:04.270344  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050446    4197 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.270544  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050472    4197 projected.go:192] Error preparing data for projected volume kube-api-access-8zqx7 for pod kube-system/kindnet-dvmsq: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.270731  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050496    4197 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.270877  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050536    4197 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271282  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050557    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf124d4e-df5c-446c-bb84-adb2312fb0d7-kube-api-access-8zqx7 podName:bf124d4e-df5c-446c-bb84-adb2312fb0d7 nodeName:}" failed. No retries permitted until 2023-01-14 10:38:51.050533394 +0000 UTC m=+13.089583874 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8zqx7" (UniqueName: "kubernetes.io/projected/bf124d4e-df5c-446c-bb84-adb2312fb0d7-kube-api-access-8zqx7") pod "kindnet-dvmsq" (UID: "bf124d4e-df5c-446c-bb84-adb2312fb0d7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271443  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050563    4197 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271586  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050586    4197 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271775  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050598    4197 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.271958  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050630    4197 projected.go:192] Error preparing data for projected volume kube-api-access-5jwgd for pod kube-system/coredns-6d4b75cb6d-d9dwj: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.272144  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050647    4197 projected.go:192] Error preparing data for projected volume kube-api-access-2ptl2 for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.272554  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050689    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f015d1c6-ed2c-4a5c-89ca-3aa07dc45194-kube-api-access-5jwgd podName:f015d1c6-ed2c-4a5c-89ca-3aa07dc45194 nodeName:}" failed. No retries permitted until 2023-01-14 10:38:49.050670598 +0000 UTC m=+11.089721091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5jwgd" (UniqueName: "kubernetes.io/projected/f015d1c6-ed2c-4a5c-89ca-3aa07dc45194-kube-api-access-5jwgd") pod "coredns-6d4b75cb6d-d9dwj" (UID: "f015d1c6-ed2c-4a5c-89ca-3aa07dc45194") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.272980  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050713    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c70afb0-95ab-4b58-84ba-92f0658d439b-kube-api-access-2ptl2 podName:0c70afb0-95ab-4b58-84ba-92f0658d439b nodeName:}" failed. No retries permitted until 2023-01-14 10:38:51.050699422 +0000 UTC m=+13.089749893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2ptl2" (UniqueName: "kubernetes.io/projected/0c70afb0-95ab-4b58-84ba-92f0658d439b-kube-api-access-2ptl2") pod "storage-provisioner" (UID: "0c70afb0-95ab-4b58-84ba-92f0658d439b") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.273167  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050777    4197 projected.go:192] Error preparing data for projected volume kube-api-access-7dfp9 for pod kube-system/kube-proxy-jnf8g: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	W0114 10:43:04.273586  130375 logs.go:138] Found kubelet problem: Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050844    4197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b1d229-e57c-4245-b94c-5f87340ac132-kube-api-access-7dfp9 podName:c4b1d229-e57c-4245-b94c-5f87340ac132 nodeName:}" failed. No retries permitted until 2023-01-14 10:38:51.050827219 +0000 UTC m=+13.089877705 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7dfp9" (UniqueName: "kubernetes.io/projected/c4b1d229-e57c-4245-b94c-5f87340ac132-kube-api-access-7dfp9") pod "kube-proxy-jnf8g" (UID: "c4b1d229-e57c-4245-b94c-5f87340ac132") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.293959  130375 logs.go:123] Gathering logs for dmesg ...
	I0114 10:43:04.293985  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:43:04.307811  130375 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:43:04.307842  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:43:04.500460  130375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:43:04.500487  130375 logs.go:123] Gathering logs for containerd ...
	I0114 10:43:04.500502  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:43:04.559152  130375 logs.go:123] Gathering logs for container status ...
	I0114 10:43:04.559197  130375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0114 10:43:04.584531  130375 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0114 10:43:04.584568  130375 out.go:239] * 
	W0114 10:43:04.584725  130375 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:43:04.584753  130375 out.go:239] * 
	W0114 10:43:04.585563  130375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:43:04.588003  130375 out.go:177] X Problems detected in kubelet:
	I0114 10:43:04.589229  130375 out.go:177]   Jan 14 10:38:47 test-preload-103656 kubelet[4197]: W0114 10:38:47.050446    4197 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.590530  130375 out.go:177]   Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050472    4197 projected.go:192] Error preparing data for projected volume kube-api-access-8zqx7 for pod kube-system/kindnet-dvmsq: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-103656" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.591776  130375 out.go:177]   Jan 14 10:38:47 test-preload-103656 kubelet[4197]: E0114 10:38:47.050496    4197 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-103656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-103656' and this object
	I0114 10:43:04.593884  130375 out.go:177] 
	W0114 10:43:04.595189  130375 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0114 10:43:03.923877    6790 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:43:04.595274  130375 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0114 10:43:04.595330  130375 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0114 10:43:04.596623  130375 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2023-01-14 10:36:58 UTC, end at Sat 2023-01-14 10:43:05 UTC. --
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.674711171Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.690257008Z" level=info msg="StopPodSandbox for \"this\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.690313002Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.707329780Z" level=info msg="StopPodSandbox for \"endpoint\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.707372630Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.722572296Z" level=info msg="StopPodSandbox for \"is\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.722623890Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.738971628Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.739015388Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.754621211Z" level=info msg="StopPodSandbox for \"please\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.754670990Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.771311033Z" level=info msg="StopPodSandbox for \"consider\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.771369731Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.788384386Z" level=info msg="StopPodSandbox for \"using\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.788434614Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.804321814Z" level=info msg="StopPodSandbox for \"full\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.804366846Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.821158675Z" level=info msg="StopPodSandbox for \"URL\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.821209950Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.837240149Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.837294434Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.853707838Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.853762258Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.869474978Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 14 10:43:03 test-preload-103656 containerd[3025]: time="2023-01-14T10:43:03.869523157Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.007355] FS-Cache: O-key=[8] '87a00f0200000000'
	[  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006684] FS-Cache: N-cookie d=0000000092f5ea2a{9p.inode} n=00000000e1d5d334
	[  +0.008751] FS-Cache: N-key=[8] '87a00f0200000000'
	[  +0.343217] FS-Cache: Duplicate cookie detected
	[  +0.004671] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=0000000092f5ea2a{9p.inode} n=00000000d68b2e5d
	[  +0.007346] FS-Cache: O-key=[8] '91a00f0200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008041] FS-Cache: N-cookie d=0000000092f5ea2a{9p.inode} n=00000000e2277570
	[  +0.008761] FS-Cache: N-key=[8] '91a00f0200000000'
	[Jan14 10:21] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan14 10:32] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000006] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +1.007675] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000005] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +2.011857] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000006] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +4.031727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000030] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[  +8.195348] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-8d666bf786b0
	[  +0.000007] ll header: 00000000: 02 42 59 8a 0a b1 02 42 c0 a8 3a 02 08 00
	[Jan14 10:38] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000380] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010776] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  10:43:05 up  1:25,  0 users,  load average: 0.05, 0.48, 0.66
	Linux test-preload-103656 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:36:58 UTC, end at Sat 2023-01-14 10:43:05 UTC. --
	Jan 14 10:41:22 test-preload-103656 kubelet[4197]: I0114 10:41:22.224474    4197 scope.go:110] "RemoveContainer" containerID="d078b7ef1c8b566b2fabdf1efcf16bbd87f7de5dfe9692b470b6f871531e6af5"
	Jan 14 10:41:22 test-preload-103656 kubelet[4197]: I0114 10:41:22.588888    4197 scope.go:110] "RemoveContainer" containerID="d078b7ef1c8b566b2fabdf1efcf16bbd87f7de5dfe9692b470b6f871531e6af5"
	Jan 14 10:41:22 test-preload-103656 kubelet[4197]: I0114 10:41:22.589239    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:41:22 test-preload-103656 kubelet[4197]: E0114 10:41:22.589722    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:41:29 test-preload-103656 kubelet[4197]: I0114 10:41:29.031882    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:41:29 test-preload-103656 kubelet[4197]: E0114 10:41:29.032221    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:41:29 test-preload-103656 kubelet[4197]: I0114 10:41:29.603257    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:41:29 test-preload-103656 kubelet[4197]: E0114 10:41:29.603573    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:41:31 test-preload-103656 kubelet[4197]: I0114 10:41:31.120131    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:41:31 test-preload-103656 kubelet[4197]: E0114 10:41:31.120458    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:41:43 test-preload-103656 kubelet[4197]: I0114 10:41:43.224081    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:41:43 test-preload-103656 kubelet[4197]: E0114 10:41:43.224605    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:41:55 test-preload-103656 kubelet[4197]: I0114 10:41:55.224106    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:41:55 test-preload-103656 kubelet[4197]: E0114 10:41:55.224455    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:42:07 test-preload-103656 kubelet[4197]: I0114 10:42:07.224291    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:42:07 test-preload-103656 kubelet[4197]: E0114 10:42:07.224669    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:42:20 test-preload-103656 kubelet[4197]: I0114 10:42:20.224840    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:42:20 test-preload-103656 kubelet[4197]: E0114 10:42:20.225385    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:42:35 test-preload-103656 kubelet[4197]: I0114 10:42:35.224145    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:42:35 test-preload-103656 kubelet[4197]: E0114 10:42:35.224484    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:42:47 test-preload-103656 kubelet[4197]: I0114 10:42:47.224437    4197 scope.go:110] "RemoveContainer" containerID="22ca20c859a6f85f960d3fe0c256666b27c8694e0f60b7c9f6fbd4b9aa4235bd"
	Jan 14 10:42:47 test-preload-103656 kubelet[4197]: E0114 10:42:47.224763    4197 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-103656_kube-system(b5ce6aaa8feda8e03d2ece032691651a)\"" pod="kube-system/etcd-test-preload-103656" podUID=b5ce6aaa8feda8e03d2ece032691651a
	Jan 14 10:43:01 test-preload-103656 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jan 14 10:43:01 test-preload-103656 systemd[1]: kubelet.service: Succeeded.
	Jan 14 10:43:01 test-preload-103656 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 10:43:05.629248  135071 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-103656 -n test-preload-103656
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-103656 -n test-preload-103656: exit status 2 (345.282934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-103656" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-103656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-103656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-103656: (2.052948877s)
--- FAIL: TestPreload (371.58s)
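
Editor's note on the failure above: kubeadm's preflight aborted because the etcd client/peer ports 2379 and 2380 were still bound, i.e. the old etcd process survived into the restarted container (consistent with the CrashLoopBackOff etcd entries in the kubelet log). One caution on the printed suggestion: `lsof -p<port>` takes a PID, not a port; a port lookup uses `lsof -i :<port>` or `ss`. A minimal triage sketch against the profile named in the log, assuming the profile still exists and `ss`/`lsof` are present in the node image:

	# show which processes hold the etcd client/peer ports inside the minikube node
	out/minikube-linux-amd64 ssh -p test-preload-103656 -- "sudo ss -ltnp | grep -E ':(2379|2380)'"
	# equivalent lookup with lsof (note: -i selects by port, -p would select by PID)
	out/minikube-linux-amd64 ssh -p test-preload-103656 -- "sudo lsof -i :2379 -i :2380"
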

                                                
                                    
x
+
TestKubernetesUpgrade (580.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-104742 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0114 10:47:53.110912   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-104742 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (51.998211785s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-104742
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-104742: (4.426795421s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-104742 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-104742 status --format={{.Host}}: exit status 7 (128.989528ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-104742 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-104742 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m39.368823126s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-104742] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-104742 in cluster kubernetes-upgrade-104742
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-104742" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12529]: E0114 10:56:28.256460   12529 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12539]: E0114 10:56:28.983861   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 14 10:56:29 kubernetes-upgrade-104742 kubelet[12551]: E0114 10:56:29.737789   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	
	

                                                
                                                
-- /stdout --
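
Editor's note: the repeated kubelet crash above ("failed to parse kubelet flag: unknown flag: --cni-conf-dir") is the actual upgrade blocker. The profile carries ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] (visible in the config dump in the stderr below); the v1.16.0 kubelet accepted that flag, but it was removed from kubelet in Kubernetes 1.24, so the v1.25.3 kubelet exits before it can register. A hedged cleanup sketch, assuming the profile's on-disk config.json serializes ExtraOptions as shown in the log and that jq is available on the host:

	# hypothetical workaround: drop the stale kubelet flag from the profile config, then retry the upgrade
	CFG=/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/config.json
	jq '.KubernetesConfig.ExtraOptions |= map(select(.Key != "cni-conf-dir"))' "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"
	out/minikube-linux-amd64 start -p kubernetes-upgrade-104742 --kubernetes-version=v1.25.3 --driver=docker --container-runtime=containerd
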
** stderr ** 
	I0114 10:48:39.124161  189998 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:48:39.124406  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:48:39.124419  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:48:39.124426  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:48:39.124563  189998 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:48:39.125302  189998 out.go:303] Setting JSON to false
	I0114 10:48:39.127279  189998 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5467,"bootTime":1673687853,"procs":858,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:48:39.127367  189998 start.go:135] virtualization: kvm guest
	I0114 10:48:39.130566  189998 out.go:177] * [kubernetes-upgrade-104742] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:48:39.132242  189998 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:48:39.132217  189998 notify.go:220] Checking for updates...
	I0114 10:48:39.135416  189998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:48:39.137122  189998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:48:39.138819  189998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:48:39.140431  189998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:48:39.142746  189998 config.go:180] Loaded profile config "kubernetes-upgrade-104742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0114 10:48:39.143347  189998 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:48:39.178253  189998 docker.go:138] docker version: linux-20.10.22
	I0114 10:48:39.178372  189998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:48:39.293060  189998 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:70 SystemTime:2023-01-14 10:48:39.200469098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:48:39.293157  189998 docker.go:255] overlay module found
	I0114 10:48:39.295995  189998 out.go:177] * Using the docker driver based on existing profile
	I0114 10:48:39.297669  189998 start.go:294] selected driver: docker
	I0114 10:48:39.297698  189998 start.go:838] validating driver "docker" against &{Name:kubernetes-upgrade-104742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-104742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:48:39.297797  189998 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:48:39.298691  189998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:48:39.424789  189998 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:73 SystemTime:2023-01-14 10:48:39.323385068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:48:39.425057  189998 cni.go:95] Creating CNI manager for ""
	I0114 10:48:39.425074  189998 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:48:39.425096  189998 start_flags.go:319] config:
	{Name:kubernetes-upgrade-104742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:48:39.427902  189998 out.go:177] * Starting control plane node kubernetes-upgrade-104742 in cluster kubernetes-upgrade-104742
	I0114 10:48:39.429285  189998 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:48:39.430773  189998 out.go:177] * Pulling base image ...
	I0114 10:48:39.432227  189998 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:48:39.432286  189998 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:48:39.432301  189998 cache.go:57] Caching tarball of preloaded images
	I0114 10:48:39.432253  189998 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:48:39.432541  189998 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:48:39.432561  189998 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:48:39.432721  189998 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/config.json ...
	I0114 10:48:39.470460  189998 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:48:39.470489  189998 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:48:39.470519  189998 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:48:39.470566  189998 start.go:364] acquiring machines lock for kubernetes-upgrade-104742: {Name:mk8f6570bb97d0587d382222dc795deb59ce2c54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:48:39.470697  189998 start.go:368] acquired machines lock for "kubernetes-upgrade-104742" in 93.138µs
	I0114 10:48:39.470722  189998 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:48:39.470729  189998 fix.go:55] fixHost starting: 
	I0114 10:48:39.471015  189998 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104742 --format={{.State.Status}}
	I0114 10:48:39.499356  189998 fix.go:103] recreateIfNeeded on kubernetes-upgrade-104742: state=Stopped err=<nil>
	W0114 10:48:39.499393  189998 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:48:39.502124  189998 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-104742" ...
	I0114 10:48:39.503596  189998 cli_runner.go:164] Run: docker start kubernetes-upgrade-104742
	I0114 10:48:39.991232  189998 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104742 --format={{.State.Status}}
	I0114 10:48:40.036005  189998 kic.go:426] container "kubernetes-upgrade-104742" state is running.
	I0114 10:48:40.036420  189998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104742
	I0114 10:48:40.082591  189998 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/config.json ...
	I0114 10:48:40.082884  189998 machine.go:88] provisioning docker machine ...
	I0114 10:48:40.082910  189998 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-104742"
	I0114 10:48:40.082957  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:40.115021  189998 main.go:134] libmachine: Using SSH client type: native
	I0114 10:48:40.115184  189998 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0114 10:48:40.115209  189998 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-104742 && echo "kubernetes-upgrade-104742" | sudo tee /etc/hostname
	I0114 10:48:40.115950  189998 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47528->127.0.0.1:32977: read: connection reset by peer
	I0114 10:48:43.243862  189998 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104742
	
	I0114 10:48:43.243947  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:43.270830  189998 main.go:134] libmachine: Using SSH client type: native
	I0114 10:48:43.270977  189998 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0114 10:48:43.270995  189998 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-104742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-104742/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-104742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:48:43.387514  189998 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:48:43.387541  189998 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:48:43.387559  189998 ubuntu.go:177] setting up certificates
	I0114 10:48:43.387566  189998 provision.go:83] configureAuth start
	I0114 10:48:43.387617  189998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104742
	I0114 10:48:43.410764  189998 provision.go:138] copyHostCerts
	I0114 10:48:43.410829  189998 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:48:43.410845  189998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:48:43.410916  189998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:48:43.411016  189998 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:48:43.411029  189998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:48:43.411065  189998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:48:43.411124  189998 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:48:43.411135  189998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:48:43.411166  189998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
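	The copyHostCerts step above is an idempotent refresh: each existing target is removed and re-copied rather than compared. A minimal Go sketch of that found/rm/cp pattern (illustrative only, not minikube's exec_runner code; paths in main are made up):
	
	package main
	
	import (
		"fmt"
		"io"
		"os"
	)
	
	// refreshCert removes any stale copy of dst and writes src in its place,
	// the same found/rm/cp sequence logged above for ca.pem, cert.pem and key.pem.
	func refreshCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}
	
	func main() {
		if err := refreshCert("certs/ca.pem", "ca.pem"); err != nil {
			fmt.Println(err)
		}
	}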
	I0114 10:48:43.411232  189998 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-104742 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-104742]
	I0114 10:48:43.895996  189998 provision.go:172] copyRemoteCerts
	I0114 10:48:43.896066  189998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:48:43.896097  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:43.919345  189998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/kubernetes-upgrade-104742/id_rsa Username:docker}
	I0114 10:48:44.006815  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0114 10:48:44.028173  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 10:48:44.049293  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:48:44.069569  189998 provision.go:86] duration metric: configureAuth took 681.98679ms
	I0114 10:48:44.069597  189998 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:48:44.069746  189998 config.go:180] Loaded profile config "kubernetes-upgrade-104742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:48:44.069756  189998 machine.go:91] provisioned docker machine in 3.986859071s
	I0114 10:48:44.069762  189998 start.go:300] post-start starting for "kubernetes-upgrade-104742" (driver="docker")
	I0114 10:48:44.069769  189998 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:48:44.069805  189998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:48:44.069839  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:44.093560  189998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/kubernetes-upgrade-104742/id_rsa Username:docker}
	I0114 10:48:44.184821  189998 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:48:44.187727  189998 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:48:44.187752  189998 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:48:44.187761  189998 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:48:44.187767  189998 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:48:44.187778  189998 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:48:44.187833  189998 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:48:44.187913  189998 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:48:44.188030  189998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:48:44.196007  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:48:44.214528  189998 start.go:303] post-start completed in 144.749516ms
	I0114 10:48:44.214614  189998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:48:44.214654  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:44.239409  189998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/kubernetes-upgrade-104742/id_rsa Username:docker}
	I0114 10:48:44.320374  189998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:48:44.324488  189998 fix.go:57] fixHost completed within 4.853754113s
	I0114 10:48:44.324509  189998 start.go:83] releasing machines lock for "kubernetes-upgrade-104742", held for 4.853798452s
	I0114 10:48:44.324589  189998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104742
	I0114 10:48:44.348085  189998 ssh_runner.go:195] Run: cat /version.json
	I0114 10:48:44.348134  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:44.348195  189998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:48:44.348249  189998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104742
	I0114 10:48:44.372165  189998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/kubernetes-upgrade-104742/id_rsa Username:docker}
	I0114 10:48:44.372637  189998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/kubernetes-upgrade-104742/id_rsa Username:docker}
	I0114 10:48:44.454956  189998 ssh_runner.go:195] Run: systemctl --version
	I0114 10:48:44.494978  189998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:48:44.508205  189998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:48:44.522355  189998 docker.go:189] disabling docker service ...
	I0114 10:48:44.522402  189998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:48:44.536062  189998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:48:44.545699  189998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:48:44.636930  189998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:48:44.735399  189998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:48:44.751661  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:48:44.767516  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:48:44.779003  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:48:44.787255  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:48:44.795843  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
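	The four sed invocations above rewrite whole lines of /etc/containerd/config.toml keyed on the setting name (sandbox_image, restrict_oom_score_adj, SystemdCgroup, conf_dir). A minimal Go sketch of the same line-replacement pattern (a hypothetical helper, not minikube's own code):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// patchLine rewrites any config line containing `key = ...` to the desired
	// value, mirroring the sed -e 's|^.*key = .*$|key = value|' -i calls above.
	func patchLine(config, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(config, key+" = "+value)
	}
	
	func main() {
		cfg := "sandbox_image = \"registry.k8s.io/pause:3.6\"\nSystemdCgroup = true\n"
		cfg = patchLine(cfg, "sandbox_image", `"registry.k8s.io/pause:3.8"`)
		cfg = patchLine(cfg, "SystemdCgroup", "false")
		fmt.Print(cfg)
	}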
	I0114 10:48:44.804634  189998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:48:44.811102  189998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:48:44.818031  189998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:48:44.902895  189998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:48:44.990459  189998 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:48:44.990534  189998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:48:44.994844  189998 start.go:472] Will wait 60s for crictl version
	I0114 10:48:44.994919  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:48:44.998285  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:48:45.030001  189998 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T10:48:45Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0114 10:48:56.078164  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:48:56.101741  189998 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:48:56.101808  189998 ssh_runner.go:195] Run: containerd --version
	I0114 10:48:56.125761  189998 ssh_runner.go:195] Run: containerd --version
	I0114 10:48:56.153091  189998 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:48:56.154625  189998 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:48:56.177677  189998 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0114 10:48:56.180949  189998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:48:56.192364  189998 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0114 10:48:56.193886  189998 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:48:56.193963  189998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:48:56.218217  189998 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.3". assuming images are not preloaded.
	I0114 10:48:56.218296  189998 ssh_runner.go:195] Run: which lz4
	I0114 10:48:56.221442  189998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0114 10:48:56.224359  189998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0114 10:48:56.224392  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (669534256 bytes)
	I0114 10:48:57.364968  189998 containerd.go:496] Took 1.143553 seconds to copy over tarball
	I0114 10:48:57.365048  189998 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0114 10:48:59.787793  189998 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.422721538s)
	I0114 10:48:59.787818  189998 containerd.go:503] Took 2.422826 seconds to extract the tarball
	I0114 10:48:59.787827  189998 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0114 10:48:59.880275  189998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:48:59.955650  189998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:49:00.033327  189998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:49:00.061079  189998 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0114 10:49:00.061518  189998 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.3
	I0114 10:49:00.061194  189998 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I0114 10:49:00.061580  189998 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 10:49:00.061172  189998 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:49:00.061746  189998 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.3
	I0114 10:49:00.061875  189998 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.3
	I0114 10:49:00.062231  189998 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I0114 10:49:00.062351  189998 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0114 10:49:00.063420  189998 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.3: Error: No such image: registry.k8s.io/kube-proxy:v1.25.3
	I0114 10:49:00.063856  189998 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.3: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.3
	I0114 10:49:00.063908  189998 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:49:00.064081  189998 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I0114 10:49:00.064170  189998 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I0114 10:49:00.064232  189998 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.3: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.3
	I0114 10:49:00.064297  189998 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.3: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 10:49:00.064388  189998 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0114 10:49:00.291890  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I0114 10:49:00.299390  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.3"
	I0114 10:49:00.310925  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.3"
	I0114 10:49:00.318700  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.3"
	I0114 10:49:00.327163  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.3"
	I0114 10:49:00.337479  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I0114 10:49:00.343143  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I0114 10:49:00.897679  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0114 10:49:00.932518  189998 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I0114 10:49:00.932570  189998 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I0114 10:49:00.932607  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:00.935446  189998 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.3" does not exist at hash "6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912" in container runtime
	I0114 10:49:00.935496  189998 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.3
	I0114 10:49:00.935540  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.027128  189998 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.3" needs transfer: "registry.k8s.io/kube-proxy:v1.25.3" does not exist at hash "beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041" in container runtime
	I0114 10:49:01.027240  189998 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.3
	I0114 10:49:01.027289  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.039238  189998 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.3" does not exist at hash "0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0" in container runtime
	I0114 10:49:01.039281  189998 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.3
	I0114 10:49:01.039322  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.049737  189998 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.3" does not exist at hash "60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a" in container runtime
	I0114 10:49:01.049790  189998 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 10:49:01.049832  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.123135  189998 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0114 10:49:01.123207  189998 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0114 10:49:01.123267  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.131418  189998 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I0114 10:49:01.131468  189998 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I0114 10:49:01.131520  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.243282  189998 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0114 10:49:01.243348  189998 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:49:01.243375  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I0114 10:49:01.243415  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3
	I0114 10:49:01.243448  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3
	I0114 10:49:01.243463  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3
	I0114 10:49:01.243375  189998 ssh_runner.go:195] Run: which crictl
	I0114 10:49:01.243481  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 10:49:01.243524  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0114 10:49:01.243537  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I0114 10:49:01.678835  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I0114 10:49:01.678935  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I0114 10:49:01.681553  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
	I0114 10:49:01.681621  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
	I0114 10:49:01.681650  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0114 10:49:01.681709  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3
	I0114 10:49:01.681949  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
	I0114 10:49:01.682033  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0114 10:49:01.684905  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
	I0114 10:49:01.684956  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I0114 10:49:01.684985  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I0114 10:49:01.684907  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0114 10:49:01.685037  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0114 10:49:01.684923  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I0114 10:49:01.685102  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I0114 10:49:01.685127  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I0114 10:49:01.684928  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.3': No such file or directory
	I0114 10:49:01.685153  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 --> /var/lib/minikube/images/kube-proxy_v1.25.3 (20268032 bytes)
	I0114 10:49:01.684964  189998 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:49:01.685520  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.3': No such file or directory
	I0114 10:49:01.685548  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 --> /var/lib/minikube/images/kube-apiserver_v1.25.3 (34241024 bytes)
	I0114 10:49:01.688389  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.3': No such file or directory
	I0114 10:49:01.688414  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 --> /var/lib/minikube/images/kube-scheduler_v1.25.3 (15801856 bytes)
	I0114 10:49:01.695916  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I0114 10:49:01.695947  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I0114 10:49:01.696005  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.3': No such file or directory
	I0114 10:49:01.696023  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 --> /var/lib/minikube/images/kube-controller-manager_v1.25.3 (31264768 bytes)
	I0114 10:49:01.696078  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0114 10:49:01.696098  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0114 10:49:01.732045  189998 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I0114 10:49:01.732111  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I0114 10:49:01.749946  189998 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0114 10:49:01.750051  189998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:49:01.957188  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I0114 10:49:01.957231  189998 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0114 10:49:01.957265  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0114 10:49:01.975969  189998 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0114 10:49:01.976030  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0114 10:49:02.962163  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 from cache
	I0114 10:49:02.962208  189998 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.3
	I0114 10:49:02.962261  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3
	I0114 10:49:03.924177  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 from cache
	I0114 10:49:03.924214  189998 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0114 10:49:03.924279  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I0114 10:49:05.368317  189998 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3: (1.444001769s)
	I0114 10:49:05.368346  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0114 10:49:05.368370  189998 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:49:05.368411  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0114 10:49:05.754961  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0114 10:49:05.755016  189998 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0114 10:49:05.755062  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0114 10:49:07.088327  189998 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3: (1.33323641s)
	I0114 10:49:07.088356  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 from cache
	I0114 10:49:07.088378  189998 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0114 10:49:07.088447  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0114 10:49:08.255221  189998 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3: (1.166741167s)
	I0114 10:49:08.255251  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 from cache
	I0114 10:49:08.255273  189998 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I0114 10:49:08.255309  189998 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I0114 10:49:11.570575  189998 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (3.315233979s)
	I0114 10:49:11.570609  189998 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15642-3818/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I0114 10:49:11.570634  189998 cache_images.go:123] Successfully loaded all cached images
	I0114 10:49:11.570639  189998 cache_images.go:92] LoadImages completed in 11.509531901s
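	Each image above goes through the same reconciliation: probe the k8s.io containerd namespace, and only when the tag is missing at the expected digest, remove the stale entry and import the cached tarball from /var/lib/minikube/images. A condensed Go sketch of that flow (helper name is illustrative, and the grep-based probe is a simplification: the real check above also compares digests):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ensureImage probes the containerd image store and, if the tag is absent,
	// drops any stale copy and imports the cached tarball, like the
	// check / rmi / import sequence logged above.
	func ensureImage(image, tarball string) error {
		check := fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s", image)
		if exec.Command("/bin/bash", "-c", check).Run() == nil {
			return nil // already present
		}
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // remove stale tag if any
		return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).Run()
	}
	
	func main() {
		err := ensureImage("registry.k8s.io/pause:3.8", "/var/lib/minikube/images/pause_3.8")
		fmt.Println(err)
	}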
	I0114 10:49:11.570698  189998 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:49:11.593830  189998 cni.go:95] Creating CNI manager for ""
	I0114 10:49:11.593848  189998 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:49:11.593863  189998 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:49:11.593876  189998 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-104742 NodeName:kubernetes-upgrade-104742 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:49:11.594036  189998 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-104742"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:49:11.594126  189998 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-104742 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:49:11.594170  189998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:49:11.601702  189998 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:49:11.601771  189998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:49:11.609019  189998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (549 bytes)
	I0114 10:49:11.621660  189998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:49:11.634522  189998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0114 10:49:11.647322  189998 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:49:11.650435  189998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:49:11.659580  189998 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742 for IP: 192.168.67.2
	I0114 10:49:11.659737  189998 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:49:11.659781  189998 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:49:11.659846  189998 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/client.key
	I0114 10:49:11.659903  189998 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/apiserver.key.c7fa3a9e
	I0114 10:49:11.659939  189998 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/proxy-client.key
	I0114 10:49:11.660023  189998 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:49:11.660050  189998 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:49:11.660060  189998 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:49:11.660080  189998 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:49:11.660101  189998 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:49:11.660125  189998 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:49:11.660159  189998 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:49:11.660779  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:49:11.678160  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:49:11.695229  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:49:11.711958  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 10:49:11.729046  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:49:11.746043  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:49:11.762797  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:49:11.780466  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:49:11.797661  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:49:11.814748  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:49:11.832387  189998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:49:11.849775  189998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:49:11.862135  189998 ssh_runner.go:195] Run: openssl version
	I0114 10:49:11.866724  189998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:49:11.873848  189998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:49:11.877108  189998 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:49:11.877155  189998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:49:11.882074  189998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:49:11.888932  189998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:49:11.896299  189998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:49:11.899292  189998 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:49:11.899351  189998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:49:11.904108  189998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:49:11.910837  189998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:49:11.918093  189998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:49:11.921079  189998 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:49:11.921127  189998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:49:11.925890  189998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
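	The openssl x509 -hash / ln -fs pairs above install each CA under its OpenSSL subject hash as /etc/ssl/certs/<hash>.0, the directory-scan convention OpenSSL-based clients use to locate trust anchors. A minimal Go sketch of that convention (hypothetical helper; needs root to write /etc/ssl/certs):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
	// subject hash, mirroring the `openssl x509 -hash -noout` + `ln -fs`
	// steps logged above.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // replace any stale link, as ln -fs does
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}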
	I0114 10:49:11.932515  189998 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-104742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:49:11.932602  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:49:11.932633  189998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:49:11.956997  189998 cri.go:87] found id: ""
	I0114 10:49:11.957050  189998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:49:11.964008  189998 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:49:11.964038  189998 kubeadm.go:627] restartCluster start
	I0114 10:49:11.964079  189998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:49:11.970581  189998 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:49:11.971226  189998 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-104742" does not appear in /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:49:11.971625  189998 kubeconfig.go:146] "kubernetes-upgrade-104742" context is missing from /home/jenkins/minikube-integration/15642-3818/kubeconfig - will repair!
	I0114 10:49:11.972251  189998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:49:11.973158  189998 kapi.go:59] client config for kubernetes-upgrade-104742: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kubernetes-upgrade-104742/client.key", CAFile:"/home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:49:11.973687  189998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:49:11.980939  189998 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-01-14 10:48:01.804969148 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-01-14 10:49:11.641755383 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-104742
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.25.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
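	The "needs reconfigure" decision above hinges on a plain diff: exit status 0 means the rendered kubeadm config matches what is on disk, 1 means it changed and the control plane must be reconfigured. A sketch of that check (hypothetical helper, not minikube's kubeadm.go):
	
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	// needsReconfigure reports whether the freshly rendered kubeadm config
	// differs from the one on disk, using diff's exit code exactly as the
	// `sudo diff -u ...` invocation above does.
	func needsReconfigure(oldPath, newPath string) (bool, error) {
		err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil // identical: cluster can restart as-is
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil // files differ: reconfigure needed
		}
		return false, err // diff itself failed to run
	}
	
	func main() {
		changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}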
	I0114 10:49:11.980961  189998 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:49:11.980974  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:49:11.981019  189998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:49:12.004608  189998 cri.go:87] found id: ""
	I0114 10:49:12.004663  189998 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 10:49:12.014700  189998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:49:12.021596  189998 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Jan 14 10:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Jan 14 10:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5819 Jan 14 10:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Jan 14 10:48 /etc/kubernetes/scheduler.conf
	
	I0114 10:49:12.021652  189998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:49:12.028352  189998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:49:12.035143  189998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:49:12.042135  189998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:49:12.048902  189998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:49:12.055762  189998 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 10:49:12.055786  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:49:12.097266  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:49:12.800629  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:49:12.929449  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:49:12.976453  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:49:13.022846  189998 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:49:13.022894  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical pgrep probe repeats at ~500ms intervals, never finding a kube-apiserver process; final probe below ...]
	I0114 10:50:12.531514  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
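
[Editor's note] The run from 10:49:13 to 10:50:12 above is a bounded wait loop: roughly every 500ms minikube probes the node for a kube-apiserver process, and after about a minute it gives up and switches to collecting diagnostics. A minimal sketch of that polling pattern in Go, assuming a local command runner and an illustrative one-minute timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls every 500ms for a running kube-apiserver process,
// matching the probe cadence in the log, and gives up after the timeout.
// Running pgrep locally instead of over SSH is a simplification.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
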
	I0114 10:50:13.031778  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:50:13.031856  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:50:13.063399  189998 cri.go:87] found id: ""
	I0114 10:50:13.063432  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.063442  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:50:13.063450  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:50:13.063506  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:50:13.088370  189998 cri.go:87] found id: ""
	I0114 10:50:13.088399  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.088407  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:50:13.088414  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:50:13.088465  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:50:13.115132  189998 cri.go:87] found id: ""
	I0114 10:50:13.115161  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.115167  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:50:13.115173  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:50:13.115224  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:50:13.145269  189998 cri.go:87] found id: ""
	I0114 10:50:13.145297  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.145303  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:50:13.145309  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:50:13.145360  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:50:13.168802  189998 cri.go:87] found id: ""
	I0114 10:50:13.168827  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.168836  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:50:13.168843  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:50:13.168896  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:50:13.192868  189998 cri.go:87] found id: ""
	I0114 10:50:13.192899  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.192908  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:50:13.192916  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:50:13.192964  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:50:13.220826  189998 cri.go:87] found id: ""
	I0114 10:50:13.220851  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.220860  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:50:13.220868  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:50:13.220920  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:50:13.245474  189998 cri.go:87] found id: ""
	I0114 10:50:13.245497  189998 logs.go:274] 0 containers: []
	W0114 10:50:13.245503  189998 logs.go:276] No container was found matching "kube-controller-manager"
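
[Editor's note] Each probe above asks crictl for any container, running or exited, whose name matches one control-plane component; every answer comes back empty (`found id: ""`), so no part of the control plane ever started. A sketch of that check, assuming crictl is on the PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the probes in the log: ask crictl for the IDs
// of all containers (any state) whose name matches the given component.
// An empty slice corresponds to the `found id: ""` entries above.
func listCRIContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listCRIContainers(c)
		fmt.Println(c, ids, err)
	}
}
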
	I0114 10:50:13.245512  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:50:13.245526  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:50:13.303864  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:23 kubernetes-upgrade-104742 kubelet[1398]: E0114 10:49:23.484178    1398 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	[... the same --cni-conf-dir failure is logged for every kubelet restart, roughly every 750ms, from 10:49:24 through 10:50:12; final entry below ...]
	W0114 10:50:13.338129  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2309]: E0114 10:50:12.985409    2309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
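
[Editor's note] The journal explains why the apiserver never appears: kubelet itself is crash-looping, because --cni-conf-dir (like the other dockershim networking flags) was removed from kubelet in Kubernetes 1.24, so the v1.25.3 kubelet exits immediately while the stale flag is still being passed. With no kubelet there are no static pods, which is also why localhost:8443 refuses connections below. A rough sketch of the kind of problem scan logs.go performs here, with an assumed, illustrative pattern list:

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// findKubeletProblems scans journalctl output for lines matching known
// fatal-kubelet patterns, mirroring the "Found kubelet problem" entries
// above. The patterns shown are illustrative assumptions.
func findKubeletProblems(journal string) []string {
	patterns := []*regexp.Regexp{
		regexp.MustCompile(`"command failed"`),
		regexp.MustCompile(`unknown flag`),
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if p.MatchString(line) {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems
}

func main() {
	journal := `Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2309]: E0114 "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"`
	for _, p := range findKubeletProblems(journal) {
		fmt.Println("Found kubelet problem:", p)
	}
}
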
	I0114 10:50:13.338333  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:50:13.338363  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:50:13.359490  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:50:13.359539  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:50:13.418249  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:50:13.418268  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:50:13.418279  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:50:13.456434  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:50:13.456481  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:50:13.483119  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:13.483142  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:50:13.483245  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:50:13.483257  189998 out.go:239]   Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2256]: E0114 10:50:09.999265    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:13.483261  189998 out.go:239]   Jan 14 10:50:10 kubernetes-upgrade-104742 kubelet[2272]: E0114 10:50:10.746566    2272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:13.483269  189998 out.go:239]   Jan 14 10:50:11 kubernetes-upgrade-104742 kubelet[2284]: E0114 10:50:11.491410    2284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:13.483278  189998 out.go:239]   Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2297]: E0114 10:50:12.243103    2297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:13.483283  189998 out.go:239]   Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2309]: E0114 10:50:12.985409    2309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:13.483291  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:13.483297  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:50:23.484865  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... minikube repeats the same eight crictl probes as at 10:50:13 (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kubernetes-dashboard, storage-provisioner, kube-controller-manager); every component still reports 0 containers ...]
	I0114 10:50:23.736882  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:50:23.736896  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:50:23.775863  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:50:23.775896  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:50:23.801683  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:50:23.801715  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:50:23.817351  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:33 kubernetes-upgrade-104742 kubelet[1592]: E0114 10:49:33.990870    1592 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	[... the same --cni-conf-dir failure repeats for every kubelet restart through 10:50:22; final entry below ...]
	W0114 10:50:23.843434  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:23 kubernetes-upgrade-104742 kubelet[2588]: E0114 10:50:23.482987    2588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:23.843551  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:50:23.843565  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:50:23.858667  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:50:23.858698  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:50:23.917311  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:50:23.917342  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:23.917351  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:50:23.917469  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:50:23.917478  189998 out.go:239]   Jan 14 10:50:20 kubernetes-upgrade-104742 kubelet[2549]: E0114 10:50:20.492907    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:23.917483  189998 out.go:239]   Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2559]: E0114 10:50:21.244060    2559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:23.917489  189998 out.go:239]   Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2569]: E0114 10:50:21.993880    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:23.917496  189998 out.go:239]   Jan 14 10:50:22 kubernetes-upgrade-104742 kubelet[2580]: E0114 10:50:22.746679    2580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:23.917504  189998 out.go:239]   Jan 14 10:50:23 kubernetes-upgrade-104742 kubelet[2588]: E0114 10:50:23.482987    2588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:23.917510  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:23.917522  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:50:33.917969  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:50:34.031901  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:50:34.031972  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:50:34.056142  189998 cri.go:87] found id: ""
	I0114 10:50:34.056168  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.056175  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:50:34.056181  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:50:34.056225  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:50:34.079055  189998 cri.go:87] found id: ""
	I0114 10:50:34.079079  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.079086  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:50:34.079093  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:50:34.079153  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:50:34.103117  189998 cri.go:87] found id: ""
	I0114 10:50:34.103143  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.103157  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:50:34.103166  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:50:34.103208  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:50:34.126929  189998 cri.go:87] found id: ""
	I0114 10:50:34.126956  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.126964  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:50:34.126983  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:50:34.127039  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:50:34.150533  189998 cri.go:87] found id: ""
	I0114 10:50:34.150561  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.150571  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:50:34.150580  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:50:34.150626  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:50:34.174848  189998 cri.go:87] found id: ""
	I0114 10:50:34.174877  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.174886  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:50:34.174894  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:50:34.174942  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:50:34.198405  189998 cri.go:87] found id: ""
	I0114 10:50:34.198429  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.198436  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:50:34.198444  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:50:34.198513  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:50:34.223009  189998 cri.go:87] found id: ""
	I0114 10:50:34.223033  189998 logs.go:274] 0 containers: []
	W0114 10:50:34.223040  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:50:34.223049  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:50:34.223079  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:50:34.279204  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:50:34.279233  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:50:34.279247  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:50:34.317126  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:50:34.317160  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:50:34.346023  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:50:34.346049  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:50:34.363559  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:44 kubernetes-upgrade-104742 kubelet[1789]: E0114 10:49:44.493795    1789 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.363993  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:45 kubernetes-upgrade-104742 kubelet[1804]: E0114 10:49:45.239098    1804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.364368  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:45 kubernetes-upgrade-104742 kubelet[1817]: E0114 10:49:45.985812    1817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.364730  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:46 kubernetes-upgrade-104742 kubelet[1832]: E0114 10:49:46.735145    1832 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.365105  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:47 kubernetes-upgrade-104742 kubelet[1845]: E0114 10:49:47.484839    1845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.365474  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:48 kubernetes-upgrade-104742 kubelet[1860]: E0114 10:49:48.243336    1860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.365852  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:48 kubernetes-upgrade-104742 kubelet[1872]: E0114 10:49:48.994611    1872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.366215  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:49 kubernetes-upgrade-104742 kubelet[1887]: E0114 10:49:49.745862    1887 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.366610  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:50 kubernetes-upgrade-104742 kubelet[1899]: E0114 10:49:50.489096    1899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.367000  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:51 kubernetes-upgrade-104742 kubelet[1914]: E0114 10:49:51.244812    1914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.367365  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:52 kubernetes-upgrade-104742 kubelet[1926]: E0114 10:49:52.010184    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.367906  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:52 kubernetes-upgrade-104742 kubelet[1939]: E0114 10:49:52.739635    1939 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.368262  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:53 kubernetes-upgrade-104742 kubelet[1952]: E0114 10:49:53.493974    1952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.368627  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:54 kubernetes-upgrade-104742 kubelet[1967]: E0114 10:49:54.242690    1967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.368981  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:54 kubernetes-upgrade-104742 kubelet[1980]: E0114 10:49:54.996287    1980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.369338  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:55 kubernetes-upgrade-104742 kubelet[1994]: E0114 10:49:55.740702    1994 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.369684  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:56 kubernetes-upgrade-104742 kubelet[2007]: E0114 10:49:56.486814    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.370035  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:57 kubernetes-upgrade-104742 kubelet[2022]: E0114 10:49:57.240649    2022 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.370389  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:57 kubernetes-upgrade-104742 kubelet[2035]: E0114 10:49:57.991300    2035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.370734  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:58 kubernetes-upgrade-104742 kubelet[2050]: E0114 10:49:58.734287    2050 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.371118  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:59 kubernetes-upgrade-104742 kubelet[2063]: E0114 10:49:59.493593    2063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.371469  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:00 kubernetes-upgrade-104742 kubelet[2078]: E0114 10:50:00.236837    2078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.371847  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:00 kubernetes-upgrade-104742 kubelet[2091]: E0114 10:50:00.991497    2091 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.372244  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:01 kubernetes-upgrade-104742 kubelet[2104]: E0114 10:50:01.737911    2104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.372611  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:02 kubernetes-upgrade-104742 kubelet[2118]: E0114 10:50:02.487348    2118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.372969  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:03 kubernetes-upgrade-104742 kubelet[2132]: E0114 10:50:03.242832    2132 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.373317  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:03 kubernetes-upgrade-104742 kubelet[2144]: E0114 10:50:03.988666    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.373662  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:04 kubernetes-upgrade-104742 kubelet[2159]: E0114 10:50:04.740998    2159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.374012  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:05 kubernetes-upgrade-104742 kubelet[2172]: E0114 10:50:05.487381    2172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.374378  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:06 kubernetes-upgrade-104742 kubelet[2187]: E0114 10:50:06.240795    2187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.374724  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:06 kubernetes-upgrade-104742 kubelet[2200]: E0114 10:50:06.987744    2200 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.375070  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:07 kubernetes-upgrade-104742 kubelet[2217]: E0114 10:50:07.738427    2217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.375419  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:08 kubernetes-upgrade-104742 kubelet[2230]: E0114 10:50:08.498330    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.375831  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2243]: E0114 10:50:09.245506    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.376211  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2256]: E0114 10:50:09.999265    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.376562  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:10 kubernetes-upgrade-104742 kubelet[2272]: E0114 10:50:10.746566    2272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.376908  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:11 kubernetes-upgrade-104742 kubelet[2284]: E0114 10:50:11.491410    2284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.377312  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2297]: E0114 10:50:12.243103    2297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.377661  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2309]: E0114 10:50:12.985409    2309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.378008  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:13 kubernetes-upgrade-104742 kubelet[2453]: E0114 10:50:13.740788    2453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.378356  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:14 kubernetes-upgrade-104742 kubelet[2464]: E0114 10:50:14.488527    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.378721  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2475]: E0114 10:50:15.234562    2475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.379091  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2486]: E0114 10:50:15.986220    2486 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.379448  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:16 kubernetes-upgrade-104742 kubelet[2497]: E0114 10:50:16.754719    2497 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.379820  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:17 kubernetes-upgrade-104742 kubelet[2508]: E0114 10:50:17.504027    2508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.380164  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2518]: E0114 10:50:18.251282    2518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.380529  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2527]: E0114 10:50:18.990707    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.380877  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:19 kubernetes-upgrade-104742 kubelet[2538]: E0114 10:50:19.749022    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.381223  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:20 kubernetes-upgrade-104742 kubelet[2549]: E0114 10:50:20.492907    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.381612  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2559]: E0114 10:50:21.244060    2559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.381959  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2569]: E0114 10:50:21.993880    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.382318  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:22 kubernetes-upgrade-104742 kubelet[2580]: E0114 10:50:22.746679    2580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.382667  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:23 kubernetes-upgrade-104742 kubelet[2588]: E0114 10:50:23.482987    2588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.383019  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2735]: E0114 10:50:24.258317    2735 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.383368  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2744]: E0114 10:50:24.994395    2744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.383741  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:25 kubernetes-upgrade-104742 kubelet[2754]: E0114 10:50:25.740703    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.384086  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:26 kubernetes-upgrade-104742 kubelet[2764]: E0114 10:50:26.497306    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.384437  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2774]: E0114 10:50:27.246836    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.384793  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2784]: E0114 10:50:27.997676    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.385139  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:28 kubernetes-upgrade-104742 kubelet[2794]: E0114 10:50:28.739391    2794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.385507  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:29 kubernetes-upgrade-104742 kubelet[2805]: E0114 10:50:29.484419    2805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.385905  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2816]: E0114 10:50:30.247376    2816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.386255  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2826]: E0114 10:50:30.993069    2826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.386644  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:31 kubernetes-upgrade-104742 kubelet[2837]: E0114 10:50:31.737633    2837 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.386994  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:32 kubernetes-upgrade-104742 kubelet[2848]: E0114 10:50:32.485210    2848 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.387356  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2859]: E0114 10:50:33.233639    2859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.387747  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2872]: E0114 10:50:33.982707    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:34.387866  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:50:34.387884  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:50:34.402952  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:34.402973  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:50:34.403069  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:50:34.403080  189998 out.go:239]   Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2826]: E0114 10:50:30.993069    2826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.403096  189998 out.go:239]   Jan 14 10:50:31 kubernetes-upgrade-104742 kubelet[2837]: E0114 10:50:31.737633    2837 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.403104  189998 out.go:239]   Jan 14 10:50:32 kubernetes-upgrade-104742 kubelet[2848]: E0114 10:50:32.485210    2848 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.403111  189998 out.go:239]   Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2859]: E0114 10:50:33.233639    2859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:34.403117  189998 out.go:239]   Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2872]: E0114 10:50:33.982707    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:34.403123  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:34.403129  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:50:44.404747  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:50:44.531773  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:50:44.531850  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:50:44.566054  189998 cri.go:87] found id: ""
	I0114 10:50:44.566095  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.566103  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:50:44.566112  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:50:44.566159  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:50:44.597743  189998 cri.go:87] found id: ""
	I0114 10:50:44.597770  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.597778  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:50:44.597786  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:50:44.597838  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:50:44.621133  189998 cri.go:87] found id: ""
	I0114 10:50:44.621159  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.621166  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:50:44.621174  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:50:44.621227  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:50:44.648988  189998 cri.go:87] found id: ""
	I0114 10:50:44.649015  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.649032  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:50:44.649040  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:50:44.649091  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:50:44.676952  189998 cri.go:87] found id: ""
	I0114 10:50:44.676977  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.676984  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:50:44.676990  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:50:44.677041  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:50:44.702959  189998 cri.go:87] found id: ""
	I0114 10:50:44.702987  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.702996  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:50:44.703004  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:50:44.703046  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:50:44.729531  189998 cri.go:87] found id: ""
	I0114 10:50:44.729558  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.729568  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:50:44.729581  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:50:44.729635  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:50:44.756709  189998 cri.go:87] found id: ""
	I0114 10:50:44.756731  189998 logs.go:274] 0 containers: []
	W0114 10:50:44.756737  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:50:44.756747  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:50:44.756766  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:50:44.794957  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:50:44.794987  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:50:44.820037  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:50:44.820072  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:50:44.843034  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:54 kubernetes-upgrade-104742 kubelet[1980]: E0114 10:49:54.996287    1980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.843781  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:55 kubernetes-upgrade-104742 kubelet[1994]: E0114 10:49:55.740702    1994 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.844394  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:56 kubernetes-upgrade-104742 kubelet[2007]: E0114 10:49:56.486814    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.844983  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:57 kubernetes-upgrade-104742 kubelet[2022]: E0114 10:49:57.240649    2022 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.845556  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:57 kubernetes-upgrade-104742 kubelet[2035]: E0114 10:49:57.991300    2035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.846181  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:58 kubernetes-upgrade-104742 kubelet[2050]: E0114 10:49:58.734287    2050 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.846745  189998 logs.go:138] Found kubelet problem: Jan 14 10:49:59 kubernetes-upgrade-104742 kubelet[2063]: E0114 10:49:59.493593    2063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.847332  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:00 kubernetes-upgrade-104742 kubelet[2078]: E0114 10:50:00.236837    2078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.847992  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:00 kubernetes-upgrade-104742 kubelet[2091]: E0114 10:50:00.991497    2091 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.848635  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:01 kubernetes-upgrade-104742 kubelet[2104]: E0114 10:50:01.737911    2104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.849283  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:02 kubernetes-upgrade-104742 kubelet[2118]: E0114 10:50:02.487348    2118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.849933  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:03 kubernetes-upgrade-104742 kubelet[2132]: E0114 10:50:03.242832    2132 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.850557  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:03 kubernetes-upgrade-104742 kubelet[2144]: E0114 10:50:03.988666    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.851120  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:04 kubernetes-upgrade-104742 kubelet[2159]: E0114 10:50:04.740998    2159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.851495  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:05 kubernetes-upgrade-104742 kubelet[2172]: E0114 10:50:05.487381    2172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.852022  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:06 kubernetes-upgrade-104742 kubelet[2187]: E0114 10:50:06.240795    2187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.852549  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:06 kubernetes-upgrade-104742 kubelet[2200]: E0114 10:50:06.987744    2200 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.853189  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:07 kubernetes-upgrade-104742 kubelet[2217]: E0114 10:50:07.738427    2217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.853785  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:08 kubernetes-upgrade-104742 kubelet[2230]: E0114 10:50:08.498330    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.854398  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2243]: E0114 10:50:09.245506    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.855015  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2256]: E0114 10:50:09.999265    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.855468  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:10 kubernetes-upgrade-104742 kubelet[2272]: E0114 10:50:10.746566    2272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.855988  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:11 kubernetes-upgrade-104742 kubelet[2284]: E0114 10:50:11.491410    2284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.856530  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2297]: E0114 10:50:12.243103    2297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.857067  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2309]: E0114 10:50:12.985409    2309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.857457  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:13 kubernetes-upgrade-104742 kubelet[2453]: E0114 10:50:13.740788    2453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.857837  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:14 kubernetes-upgrade-104742 kubelet[2464]: E0114 10:50:14.488527    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.858205  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2475]: E0114 10:50:15.234562    2475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.858563  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2486]: E0114 10:50:15.986220    2486 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.858971  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:16 kubernetes-upgrade-104742 kubelet[2497]: E0114 10:50:16.754719    2497 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.859369  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:17 kubernetes-upgrade-104742 kubelet[2508]: E0114 10:50:17.504027    2508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.859778  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2518]: E0114 10:50:18.251282    2518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.860191  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2527]: E0114 10:50:18.990707    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.860573  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:19 kubernetes-upgrade-104742 kubelet[2538]: E0114 10:50:19.749022    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.860951  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:20 kubernetes-upgrade-104742 kubelet[2549]: E0114 10:50:20.492907    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.861348  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2559]: E0114 10:50:21.244060    2559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.861755  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2569]: E0114 10:50:21.993880    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.862325  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:22 kubernetes-upgrade-104742 kubelet[2580]: E0114 10:50:22.746679    2580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.862904  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:23 kubernetes-upgrade-104742 kubelet[2588]: E0114 10:50:23.482987    2588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.863336  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2735]: E0114 10:50:24.258317    2735 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.863782  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2744]: E0114 10:50:24.994395    2744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.864174  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:25 kubernetes-upgrade-104742 kubelet[2754]: E0114 10:50:25.740703    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.864527  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:26 kubernetes-upgrade-104742 kubelet[2764]: E0114 10:50:26.497306    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.864894  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2774]: E0114 10:50:27.246836    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.865262  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2784]: E0114 10:50:27.997676    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.865615  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:28 kubernetes-upgrade-104742 kubelet[2794]: E0114 10:50:28.739391    2794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.865974  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:29 kubernetes-upgrade-104742 kubelet[2805]: E0114 10:50:29.484419    2805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.866322  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2816]: E0114 10:50:30.247376    2816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.866670  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2826]: E0114 10:50:30.993069    2826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.867019  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:31 kubernetes-upgrade-104742 kubelet[2837]: E0114 10:50:31.737633    2837 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.867383  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:32 kubernetes-upgrade-104742 kubelet[2848]: E0114 10:50:32.485210    2848 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.867799  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2859]: E0114 10:50:33.233639    2859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.868149  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2872]: E0114 10:50:33.982707    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.868499  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:34 kubernetes-upgrade-104742 kubelet[3017]: E0114 10:50:34.733613    3017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.868861  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:35 kubernetes-upgrade-104742 kubelet[3028]: E0114 10:50:35.483768    3028 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.869221  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3039]: E0114 10:50:36.236030    3039 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.869577  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3050]: E0114 10:50:36.986977    3050 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.869920  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:37 kubernetes-upgrade-104742 kubelet[3061]: E0114 10:50:37.735508    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.870279  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:38 kubernetes-upgrade-104742 kubelet[3072]: E0114 10:50:38.484405    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.870630  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3083]: E0114 10:50:39.235634    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.870974  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3094]: E0114 10:50:39.984040    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.871329  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:40 kubernetes-upgrade-104742 kubelet[3104]: E0114 10:50:40.734091    3104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.871695  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:41 kubernetes-upgrade-104742 kubelet[3115]: E0114 10:50:41.484952    3115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.872049  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3126]: E0114 10:50:42.235875    3126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.872405  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3137]: E0114 10:50:42.992707    3137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.872754  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:43 kubernetes-upgrade-104742 kubelet[3147]: E0114 10:50:43.779276    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.873104  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:44 kubernetes-upgrade-104742 kubelet[3159]: E0114 10:50:44.488613    3159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:44.873226  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:50:44.873243  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:50:44.889014  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:50:44.889045  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:50:44.947608  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:50:44.947639  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:44.947654  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:50:44.947828  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:50:44.947841  189998 out.go:239]   Jan 14 10:50:41 kubernetes-upgrade-104742 kubelet[3115]: E0114 10:50:41.484952    3115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.947848  189998 out.go:239]   Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3126]: E0114 10:50:42.235875    3126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.947855  189998 out.go:239]   Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3137]: E0114 10:50:42.992707    3137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.947862  189998 out.go:239]   Jan 14 10:50:43 kubernetes-upgrade-104742 kubelet[3147]: E0114 10:50:43.779276    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:44.947869  189998 out.go:239]   Jan 14 10:50:44 kubernetes-upgrade-104742 kubelet[3159]: E0114 10:50:44.488613    3159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:44.947876  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:44.947885  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:50:54.949412  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:50:55.031320  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:50:55.031404  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:50:55.061703  189998 cri.go:87] found id: ""
	I0114 10:50:55.061732  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.061740  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:50:55.061746  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:50:55.061796  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:50:55.085375  189998 cri.go:87] found id: ""
	I0114 10:50:55.085398  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.085406  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:50:55.085414  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:50:55.085465  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:50:55.108967  189998 cri.go:87] found id: ""
	I0114 10:50:55.108993  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.109002  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:50:55.109010  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:50:55.109066  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:50:55.132035  189998 cri.go:87] found id: ""
	I0114 10:50:55.132061  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.132069  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:50:55.132076  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:50:55.132123  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:50:55.154600  189998 cri.go:87] found id: ""
	I0114 10:50:55.154625  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.154638  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:50:55.154645  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:50:55.154694  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:50:55.178702  189998 cri.go:87] found id: ""
	I0114 10:50:55.178722  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.178729  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:50:55.178735  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:50:55.178774  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:50:55.201458  189998 cri.go:87] found id: ""
	I0114 10:50:55.201477  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.201483  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:50:55.201489  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:50:55.201529  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:50:55.224436  189998 cri.go:87] found id: ""
	I0114 10:50:55.224457  189998 logs.go:274] 0 containers: []
	W0114 10:50:55.224463  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:50:55.224471  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:50:55.224481  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:50:55.242514  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:05 kubernetes-upgrade-104742 kubelet[2172]: E0114 10:50:05.487381    2172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.243109  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:06 kubernetes-upgrade-104742 kubelet[2187]: E0114 10:50:06.240795    2187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.243687  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:06 kubernetes-upgrade-104742 kubelet[2200]: E0114 10:50:06.987744    2200 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.244124  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:07 kubernetes-upgrade-104742 kubelet[2217]: E0114 10:50:07.738427    2217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.244488  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:08 kubernetes-upgrade-104742 kubelet[2230]: E0114 10:50:08.498330    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.244838  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2243]: E0114 10:50:09.245506    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.245204  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:09 kubernetes-upgrade-104742 kubelet[2256]: E0114 10:50:09.999265    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.245562  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:10 kubernetes-upgrade-104742 kubelet[2272]: E0114 10:50:10.746566    2272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.245927  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:11 kubernetes-upgrade-104742 kubelet[2284]: E0114 10:50:11.491410    2284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.246276  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2297]: E0114 10:50:12.243103    2297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.246633  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:12 kubernetes-upgrade-104742 kubelet[2309]: E0114 10:50:12.985409    2309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.246992  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:13 kubernetes-upgrade-104742 kubelet[2453]: E0114 10:50:13.740788    2453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.247344  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:14 kubernetes-upgrade-104742 kubelet[2464]: E0114 10:50:14.488527    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.247721  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2475]: E0114 10:50:15.234562    2475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.248066  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2486]: E0114 10:50:15.986220    2486 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.248415  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:16 kubernetes-upgrade-104742 kubelet[2497]: E0114 10:50:16.754719    2497 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.248761  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:17 kubernetes-upgrade-104742 kubelet[2508]: E0114 10:50:17.504027    2508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.249132  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2518]: E0114 10:50:18.251282    2518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.249481  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2527]: E0114 10:50:18.990707    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.249831  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:19 kubernetes-upgrade-104742 kubelet[2538]: E0114 10:50:19.749022    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.250173  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:20 kubernetes-upgrade-104742 kubelet[2549]: E0114 10:50:20.492907    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.250534  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2559]: E0114 10:50:21.244060    2559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.250881  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2569]: E0114 10:50:21.993880    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.251225  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:22 kubernetes-upgrade-104742 kubelet[2580]: E0114 10:50:22.746679    2580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.251576  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:23 kubernetes-upgrade-104742 kubelet[2588]: E0114 10:50:23.482987    2588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.251940  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2735]: E0114 10:50:24.258317    2735 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.252289  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2744]: E0114 10:50:24.994395    2744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.252634  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:25 kubernetes-upgrade-104742 kubelet[2754]: E0114 10:50:25.740703    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.252990  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:26 kubernetes-upgrade-104742 kubelet[2764]: E0114 10:50:26.497306    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.253352  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2774]: E0114 10:50:27.246836    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.253709  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2784]: E0114 10:50:27.997676    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.254055  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:28 kubernetes-upgrade-104742 kubelet[2794]: E0114 10:50:28.739391    2794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.254547  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:29 kubernetes-upgrade-104742 kubelet[2805]: E0114 10:50:29.484419    2805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.254974  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2816]: E0114 10:50:30.247376    2816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.255332  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2826]: E0114 10:50:30.993069    2826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.255727  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:31 kubernetes-upgrade-104742 kubelet[2837]: E0114 10:50:31.737633    2837 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.256107  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:32 kubernetes-upgrade-104742 kubelet[2848]: E0114 10:50:32.485210    2848 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.256461  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2859]: E0114 10:50:33.233639    2859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.256810  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2872]: E0114 10:50:33.982707    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.257165  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:34 kubernetes-upgrade-104742 kubelet[3017]: E0114 10:50:34.733613    3017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.257515  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:35 kubernetes-upgrade-104742 kubelet[3028]: E0114 10:50:35.483768    3028 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.257865  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3039]: E0114 10:50:36.236030    3039 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.258220  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3050]: E0114 10:50:36.986977    3050 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.258569  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:37 kubernetes-upgrade-104742 kubelet[3061]: E0114 10:50:37.735508    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.258919  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:38 kubernetes-upgrade-104742 kubelet[3072]: E0114 10:50:38.484405    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.259270  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3083]: E0114 10:50:39.235634    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.259626  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3094]: E0114 10:50:39.984040    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.259993  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:40 kubernetes-upgrade-104742 kubelet[3104]: E0114 10:50:40.734091    3104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.260352  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:41 kubernetes-upgrade-104742 kubelet[3115]: E0114 10:50:41.484952    3115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.260699  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3126]: E0114 10:50:42.235875    3126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.261052  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3137]: E0114 10:50:42.992707    3137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.261405  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:43 kubernetes-upgrade-104742 kubelet[3147]: E0114 10:50:43.779276    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.261751  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:44 kubernetes-upgrade-104742 kubelet[3159]: E0114 10:50:44.488613    3159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.262101  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3300]: E0114 10:50:45.234258    3300 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.262453  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3311]: E0114 10:50:45.984415    3311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.262799  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:46 kubernetes-upgrade-104742 kubelet[3322]: E0114 10:50:46.735277    3322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.263149  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:47 kubernetes-upgrade-104742 kubelet[3334]: E0114 10:50:47.496867    3334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.263495  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3344]: E0114 10:50:48.257445    3344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.263861  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3355]: E0114 10:50:48.985017    3355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.264222  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:49 kubernetes-upgrade-104742 kubelet[3366]: E0114 10:50:49.735456    3366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.264587  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:50 kubernetes-upgrade-104742 kubelet[3377]: E0114 10:50:50.483422    3377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.264946  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3387]: E0114 10:50:51.233534    3387 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.265294  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3399]: E0114 10:50:51.990479    3399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.265667  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:52 kubernetes-upgrade-104742 kubelet[3410]: E0114 10:50:52.733577    3410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.266021  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:53 kubernetes-upgrade-104742 kubelet[3421]: E0114 10:50:53.485855    3421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.266380  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3432]: E0114 10:50:54.235040    3432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.266735  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3443]: E0114 10:50:54.989468    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
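The dump above is a systemd restart loop: each attempt runs under a fresh kubelet PID (2172 through 3443) roughly every 0.75 seconds, and every attempt dies at flag parsing. --cni-conf-dir was a dockershim-era kubelet flag that was removed along with dockershim in Kubernetes v1.24, so the v1.25.3 kubelet installed by this upgrade rejects it as long as the flag remains in the node's kubelet service configuration.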
	I0114 10:50:55.266859  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:50:55.266877  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:50:55.281711  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:50:55.281738  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:50:55.335168  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
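The connection-refused error follows directly from the kubelet failures: kube-apiserver and etcd run as static pods that only the kubelet can create, so with the kubelet crash-looping nothing is listening on localhost:8443 and "describe nodes" is bound to fail until the kubelet comes up.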
	I0114 10:50:55.335199  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:50:55.335217  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:50:55.370674  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:50:55.370706  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:50:55.395643  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:55.395665  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:50:55.395806  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:50:55.395820  189998 out.go:239]   Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3399]: E0114 10:50:51.990479    3399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.395829  189998 out.go:239]   Jan 14 10:50:52 kubernetes-upgrade-104742 kubelet[3410]: E0114 10:50:52.733577    3410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.395836  189998 out.go:239]   Jan 14 10:50:53 kubernetes-upgrade-104742 kubelet[3421]: E0114 10:50:53.485855    3421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.395843  189998 out.go:239]   Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3432]: E0114 10:50:54.235040    3432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:50:55.395849  189998 out.go:239]   Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3443]: E0114 10:50:54.989468    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:50:55.395856  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:55.395862  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
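A quick way to confirm where the stale flag comes from is to grep the kubelet configuration inside the node container (the cluster runs under the docker driver, and the hostname kubernetes-upgrade-104742 appears throughout the log). This is a sketch only; the paths are the usual kubeadm/systemd drop-in locations, assumed rather than taken from this log:

    $ docker exec kubernetes-upgrade-104742 \
        grep -r -e '--cni-conf-dir' \
        /var/lib/kubelet/kubeadm-flags.env \
        /etc/systemd/system/kubelet.service.d/
    # Any match would need to be removed (then `systemctl daemon-reload`
    # and `systemctl restart kubelet`): since the dockershim removal in
    # v1.24, CNI is configured in the container runtime (here containerd),
    # not via kubelet flags.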
	I0114 10:51:05.396981  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:51:05.531865  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:51:05.531961  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:51:05.560243  189998 cri.go:87] found id: ""
	I0114 10:51:05.560271  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.560281  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:51:05.560291  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:51:05.560351  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:51:05.593539  189998 cri.go:87] found id: ""
	I0114 10:51:05.593565  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.593574  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:51:05.593582  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:51:05.593637  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:51:05.617591  189998 cri.go:87] found id: ""
	I0114 10:51:05.617613  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.617619  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:51:05.617624  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:51:05.617675  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:51:05.645663  189998 cri.go:87] found id: ""
	I0114 10:51:05.645682  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.645689  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:51:05.645695  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:51:05.645740  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:51:05.676401  189998 cri.go:87] found id: ""
	I0114 10:51:05.676428  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.676437  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:51:05.676445  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:51:05.676496  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:51:05.703574  189998 cri.go:87] found id: ""
	I0114 10:51:05.703602  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.703611  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:51:05.703619  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:51:05.703696  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:51:05.730290  189998 cri.go:87] found id: ""
	I0114 10:51:05.730308  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.730315  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:51:05.730321  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:51:05.730360  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:51:05.767051  189998 cri.go:87] found id: ""
	I0114 10:51:05.767077  189998 logs.go:274] 0 containers: []
	W0114 10:51:05.767086  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:51:05.767098  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:51:05.767114  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:51:05.840846  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:51:05.840879  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:51:05.840902  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:51:05.886462  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:51:05.886501  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:51:05.942791  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:51:05.942823  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:51:05.968840  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:15 kubernetes-upgrade-104742 kubelet[2486]: E0114 10:50:15.986220    2486 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.969559  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:16 kubernetes-upgrade-104742 kubelet[2497]: E0114 10:50:16.754719    2497 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.970255  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:17 kubernetes-upgrade-104742 kubelet[2508]: E0114 10:50:17.504027    2508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.970924  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2518]: E0114 10:50:18.251282    2518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.971599  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:18 kubernetes-upgrade-104742 kubelet[2527]: E0114 10:50:18.990707    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.972284  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:19 kubernetes-upgrade-104742 kubelet[2538]: E0114 10:50:19.749022    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.972920  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:20 kubernetes-upgrade-104742 kubelet[2549]: E0114 10:50:20.492907    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.973534  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2559]: E0114 10:50:21.244060    2559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.974174  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:21 kubernetes-upgrade-104742 kubelet[2569]: E0114 10:50:21.993880    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.974813  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:22 kubernetes-upgrade-104742 kubelet[2580]: E0114 10:50:22.746679    2580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.976647  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:23 kubernetes-upgrade-104742 kubelet[2588]: E0114 10:50:23.482987    2588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.977313  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2735]: E0114 10:50:24.258317    2735 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.977982  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:24 kubernetes-upgrade-104742 kubelet[2744]: E0114 10:50:24.994395    2744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.978636  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:25 kubernetes-upgrade-104742 kubelet[2754]: E0114 10:50:25.740703    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.979292  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:26 kubernetes-upgrade-104742 kubelet[2764]: E0114 10:50:26.497306    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.979955  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2774]: E0114 10:50:27.246836    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.980598  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2784]: E0114 10:50:27.997676    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.981202  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:28 kubernetes-upgrade-104742 kubelet[2794]: E0114 10:50:28.739391    2794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.981829  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:29 kubernetes-upgrade-104742 kubelet[2805]: E0114 10:50:29.484419    2805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.982444  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2816]: E0114 10:50:30.247376    2816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.983036  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2826]: E0114 10:50:30.993069    2826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.983663  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:31 kubernetes-upgrade-104742 kubelet[2837]: E0114 10:50:31.737633    2837 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.984318  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:32 kubernetes-upgrade-104742 kubelet[2848]: E0114 10:50:32.485210    2848 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.984950  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2859]: E0114 10:50:33.233639    2859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.985582  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2872]: E0114 10:50:33.982707    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.986095  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:34 kubernetes-upgrade-104742 kubelet[3017]: E0114 10:50:34.733613    3017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.986535  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:35 kubernetes-upgrade-104742 kubelet[3028]: E0114 10:50:35.483768    3028 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.986973  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3039]: E0114 10:50:36.236030    3039 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.987433  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3050]: E0114 10:50:36.986977    3050 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.990183  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:37 kubernetes-upgrade-104742 kubelet[3061]: E0114 10:50:37.735508    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.990816  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:38 kubernetes-upgrade-104742 kubelet[3072]: E0114 10:50:38.484405    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.991433  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3083]: E0114 10:50:39.235634    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.992090  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3094]: E0114 10:50:39.984040    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.992727  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:40 kubernetes-upgrade-104742 kubelet[3104]: E0114 10:50:40.734091    3104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.993354  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:41 kubernetes-upgrade-104742 kubelet[3115]: E0114 10:50:41.484952    3115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.993990  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3126]: E0114 10:50:42.235875    3126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.994620  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3137]: E0114 10:50:42.992707    3137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.995302  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:43 kubernetes-upgrade-104742 kubelet[3147]: E0114 10:50:43.779276    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.995870  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:44 kubernetes-upgrade-104742 kubelet[3159]: E0114 10:50:44.488613    3159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.996458  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3300]: E0114 10:50:45.234258    3300 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.997065  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3311]: E0114 10:50:45.984415    3311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.997687  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:46 kubernetes-upgrade-104742 kubelet[3322]: E0114 10:50:46.735277    3322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.998308  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:47 kubernetes-upgrade-104742 kubelet[3334]: E0114 10:50:47.496867    3334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.998787  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3344]: E0114 10:50:48.257445    3344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.999159  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3355]: E0114 10:50:48.985017    3355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.999507  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:49 kubernetes-upgrade-104742 kubelet[3366]: E0114 10:50:49.735456    3366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:05.999885  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:50 kubernetes-upgrade-104742 kubelet[3377]: E0114 10:50:50.483422    3377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.000255  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3387]: E0114 10:50:51.233534    3387 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.000615  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3399]: E0114 10:50:51.990479    3399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.000970  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:52 kubernetes-upgrade-104742 kubelet[3410]: E0114 10:50:52.733577    3410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.001315  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:53 kubernetes-upgrade-104742 kubelet[3421]: E0114 10:50:53.485855    3421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.001658  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3432]: E0114 10:50:54.235040    3432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.002006  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3443]: E0114 10:50:54.989468    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.002351  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:55 kubernetes-upgrade-104742 kubelet[3594]: E0114 10:50:55.733273    3594 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.002699  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:56 kubernetes-upgrade-104742 kubelet[3605]: E0114 10:50:56.484817    3605 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.003051  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3616]: E0114 10:50:57.232900    3616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.003403  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3627]: E0114 10:50:57.984575    3627 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.003784  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:58 kubernetes-upgrade-104742 kubelet[3639]: E0114 10:50:58.733663    3639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.004135  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:59 kubernetes-upgrade-104742 kubelet[3650]: E0114 10:50:59.483499    3650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.004483  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3661]: E0114 10:51:00.235866    3661 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.004835  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3672]: E0114 10:51:00.985751    3672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.005180  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:01 kubernetes-upgrade-104742 kubelet[3682]: E0114 10:51:01.733948    3682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.005538  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:02 kubernetes-upgrade-104742 kubelet[3693]: E0114 10:51:02.484830    3693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.005900  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3705]: E0114 10:51:03.233345    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.006257  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3716]: E0114 10:51:03.989937    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.006608  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:04 kubernetes-upgrade-104742 kubelet[3726]: E0114 10:51:04.738287    3726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.006957  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:05 kubernetes-upgrade-104742 kubelet[3739]: E0114 10:51:05.483245    3739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:06.007076  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:51:06.007105  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:51:06.053447  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:06.053471  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:51:06.053568  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:51:06.053577  189998 out.go:239]   Jan 14 10:51:02 kubernetes-upgrade-104742 kubelet[3693]: E0114 10:51:02.484830    3693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.053592  189998 out.go:239]   Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3705]: E0114 10:51:03.233345    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.053597  189998 out.go:239]   Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3716]: E0114 10:51:03.989937    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.053602  189998 out.go:239]   Jan 14 10:51:04 kubernetes-upgrade-104742 kubelet[3726]: E0114 10:51:04.738287    3726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:06.053606  189998 out.go:239]   Jan 14 10:51:05 kubernetes-upgrade-104742 kubelet[3739]: E0114 10:51:05.483245    3739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:06.053610  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:06.053614  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
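From here the same roughly 10-second probe cycle repeats (10:50:55, 10:51:05, 10:51:16, ...): minikube checks for a kube-apiserver process, lists CRI containers for each expected component, and finds none, which is consistent with a kubelet that never stays up long enough to launch any static pods.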
	I0114 10:51:16.055228  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:51:16.531272  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:51:16.531362  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:51:16.555620  189998 cri.go:87] found id: ""
	I0114 10:51:16.555640  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.555647  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:51:16.555653  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:51:16.555739  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:51:16.578536  189998 cri.go:87] found id: ""
	I0114 10:51:16.578557  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.578564  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:51:16.578570  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:51:16.578612  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:51:16.602574  189998 cri.go:87] found id: ""
	I0114 10:51:16.602606  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.602618  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:51:16.602625  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:51:16.602671  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:51:16.627136  189998 cri.go:87] found id: ""
	I0114 10:51:16.627162  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.627169  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:51:16.627178  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:51:16.627225  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:51:16.650034  189998 cri.go:87] found id: ""
	I0114 10:51:16.650070  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.650077  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:51:16.650083  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:51:16.650126  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:51:16.672307  189998 cri.go:87] found id: ""
	I0114 10:51:16.672335  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.672342  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:51:16.672348  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:51:16.672410  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:51:16.696897  189998 cri.go:87] found id: ""
	I0114 10:51:16.696918  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.696925  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:51:16.696931  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:51:16.696984  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:51:16.726067  189998 cri.go:87] found id: ""
	I0114 10:51:16.726088  189998 logs.go:274] 0 containers: []
	W0114 10:51:16.726095  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:51:16.726103  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:51:16.726113  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:51:16.755149  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:51:16.755176  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:51:16.772812  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2774]: E0114 10:50:27.246836    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.773177  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:27 kubernetes-upgrade-104742 kubelet[2784]: E0114 10:50:27.997676    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.773528  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:28 kubernetes-upgrade-104742 kubelet[2794]: E0114 10:50:28.739391    2794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.773876  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:29 kubernetes-upgrade-104742 kubelet[2805]: E0114 10:50:29.484419    2805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.774234  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2816]: E0114 10:50:30.247376    2816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.774585  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:30 kubernetes-upgrade-104742 kubelet[2826]: E0114 10:50:30.993069    2826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.774938  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:31 kubernetes-upgrade-104742 kubelet[2837]: E0114 10:50:31.737633    2837 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.775299  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:32 kubernetes-upgrade-104742 kubelet[2848]: E0114 10:50:32.485210    2848 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.775658  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2859]: E0114 10:50:33.233639    2859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.776030  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:33 kubernetes-upgrade-104742 kubelet[2872]: E0114 10:50:33.982707    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.776388  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:34 kubernetes-upgrade-104742 kubelet[3017]: E0114 10:50:34.733613    3017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.776737  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:35 kubernetes-upgrade-104742 kubelet[3028]: E0114 10:50:35.483768    3028 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.777090  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3039]: E0114 10:50:36.236030    3039 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.777444  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:36 kubernetes-upgrade-104742 kubelet[3050]: E0114 10:50:36.986977    3050 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.777798  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:37 kubernetes-upgrade-104742 kubelet[3061]: E0114 10:50:37.735508    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.778167  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:38 kubernetes-upgrade-104742 kubelet[3072]: E0114 10:50:38.484405    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.778526  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3083]: E0114 10:50:39.235634    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.778875  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3094]: E0114 10:50:39.984040    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.779234  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:40 kubernetes-upgrade-104742 kubelet[3104]: E0114 10:50:40.734091    3104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.779585  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:41 kubernetes-upgrade-104742 kubelet[3115]: E0114 10:50:41.484952    3115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.779964  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3126]: E0114 10:50:42.235875    3126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.780326  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3137]: E0114 10:50:42.992707    3137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.780685  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:43 kubernetes-upgrade-104742 kubelet[3147]: E0114 10:50:43.779276    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.781041  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:44 kubernetes-upgrade-104742 kubelet[3159]: E0114 10:50:44.488613    3159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.781398  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3300]: E0114 10:50:45.234258    3300 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.781750  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3311]: E0114 10:50:45.984415    3311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.782104  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:46 kubernetes-upgrade-104742 kubelet[3322]: E0114 10:50:46.735277    3322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.782458  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:47 kubernetes-upgrade-104742 kubelet[3334]: E0114 10:50:47.496867    3334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.782809  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3344]: E0114 10:50:48.257445    3344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.783157  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3355]: E0114 10:50:48.985017    3355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.783520  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:49 kubernetes-upgrade-104742 kubelet[3366]: E0114 10:50:49.735456    3366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.783980  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:50 kubernetes-upgrade-104742 kubelet[3377]: E0114 10:50:50.483422    3377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.784565  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3387]: E0114 10:50:51.233534    3387 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.786740  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3399]: E0114 10:50:51.990479    3399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.787338  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:52 kubernetes-upgrade-104742 kubelet[3410]: E0114 10:50:52.733577    3410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.787743  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:53 kubernetes-upgrade-104742 kubelet[3421]: E0114 10:50:53.485855    3421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.788093  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3432]: E0114 10:50:54.235040    3432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.788439  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3443]: E0114 10:50:54.989468    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.788799  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:55 kubernetes-upgrade-104742 kubelet[3594]: E0114 10:50:55.733273    3594 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.789146  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:56 kubernetes-upgrade-104742 kubelet[3605]: E0114 10:50:56.484817    3605 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.789497  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3616]: E0114 10:50:57.232900    3616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.789851  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3627]: E0114 10:50:57.984575    3627 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.790203  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:58 kubernetes-upgrade-104742 kubelet[3639]: E0114 10:50:58.733663    3639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.790553  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:59 kubernetes-upgrade-104742 kubelet[3650]: E0114 10:50:59.483499    3650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.790907  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3661]: E0114 10:51:00.235866    3661 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.791262  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3672]: E0114 10:51:00.985751    3672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.791613  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:01 kubernetes-upgrade-104742 kubelet[3682]: E0114 10:51:01.733948    3682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.791991  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:02 kubernetes-upgrade-104742 kubelet[3693]: E0114 10:51:02.484830    3693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.792342  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3705]: E0114 10:51:03.233345    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.792696  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3716]: E0114 10:51:03.989937    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.793071  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:04 kubernetes-upgrade-104742 kubelet[3726]: E0114 10:51:04.738287    3726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.793419  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:05 kubernetes-upgrade-104742 kubelet[3739]: E0114 10:51:05.483245    3739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.793770  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3888]: E0114 10:51:06.237895    3888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.794129  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3899]: E0114 10:51:06.994650    3899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.794488  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:07 kubernetes-upgrade-104742 kubelet[3909]: E0114 10:51:07.741049    3909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.794847  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:08 kubernetes-upgrade-104742 kubelet[3920]: E0114 10:51:08.495454    3920 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.795196  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3930]: E0114 10:51:09.247774    3930 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.795543  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3940]: E0114 10:51:09.985310    3940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.796034  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:10 kubernetes-upgrade-104742 kubelet[3951]: E0114 10:51:10.740757    3951 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.796402  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:11 kubernetes-upgrade-104742 kubelet[3962]: E0114 10:51:11.495938    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.796779  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3970]: E0114 10:51:12.239440    3970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.797125  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3982]: E0114 10:51:12.992115    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.797482  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:13 kubernetes-upgrade-104742 kubelet[3993]: E0114 10:51:13.733327    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.797838  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:14 kubernetes-upgrade-104742 kubelet[4004]: E0114 10:51:14.482732    4004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.798186  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4014]: E0114 10:51:15.235362    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.798546  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4025]: E0114 10:51:15.985069    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.798899  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:16 kubernetes-upgrade-104742 kubelet[4122]: E0114 10:51:16.741347    4122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:16.799018  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:51:16.799035  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:51:16.814549  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:51:16.814578  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:51:16.869069  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 10:51:16.869092  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:51:16.869107  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:51:16.904583  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:16.904617  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:51:16.904733  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:51:16.904747  189998 out.go:239]   Jan 14 10:51:13 kubernetes-upgrade-104742 kubelet[3993]: E0114 10:51:13.733327    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.904753  189998 out.go:239]   Jan 14 10:51:14 kubernetes-upgrade-104742 kubelet[4004]: E0114 10:51:14.482732    4004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.904760  189998 out.go:239]   Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4014]: E0114 10:51:15.235362    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.904773  189998 out.go:239]   Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4025]: E0114 10:51:15.985069    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:16.904778  189998 out.go:239]   Jan 14 10:51:16 kubernetes-upgrade-104742 kubelet[4122]: E0114 10:51:16.741347    4122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:16.904783  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:16.904791  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:51:26.906137  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:51:27.032062  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:51:27.032136  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:51:27.056870  189998 cri.go:87] found id: ""
	I0114 10:51:27.056894  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.056901  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:51:27.056907  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:51:27.056952  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:51:27.080061  189998 cri.go:87] found id: ""
	I0114 10:51:27.080089  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.080098  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:51:27.080105  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:51:27.080172  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:51:27.103488  189998 cri.go:87] found id: ""
	I0114 10:51:27.103513  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.103520  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:51:27.103525  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:51:27.103580  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:51:27.126106  189998 cri.go:87] found id: ""
	I0114 10:51:27.126128  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.126134  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:51:27.126139  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:51:27.126190  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:51:27.152568  189998 cri.go:87] found id: ""
	I0114 10:51:27.152588  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.152595  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:51:27.152601  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:51:27.152639  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:51:27.176362  189998 cri.go:87] found id: ""
	I0114 10:51:27.176390  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.176399  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:51:27.176411  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:51:27.176453  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:51:27.201065  189998 cri.go:87] found id: ""
	I0114 10:51:27.201184  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.201195  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:51:27.201201  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:51:27.201253  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:51:27.226755  189998 cri.go:87] found id: ""
	I0114 10:51:27.226778  189998 logs.go:274] 0 containers: []
	W0114 10:51:27.226784  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:51:27.226794  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:51:27.226806  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:51:27.263102  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:51:27.263132  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:51:27.295881  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:51:27.295924  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:51:27.316040  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:37 kubernetes-upgrade-104742 kubelet[3061]: E0114 10:50:37.735508    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.316738  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:38 kubernetes-upgrade-104742 kubelet[3072]: E0114 10:50:38.484405    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.317375  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3083]: E0114 10:50:39.235634    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.318038  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:39 kubernetes-upgrade-104742 kubelet[3094]: E0114 10:50:39.984040    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.318646  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:40 kubernetes-upgrade-104742 kubelet[3104]: E0114 10:50:40.734091    3104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.319230  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:41 kubernetes-upgrade-104742 kubelet[3115]: E0114 10:50:41.484952    3115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.319801  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3126]: E0114 10:50:42.235875    3126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.320204  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:42 kubernetes-upgrade-104742 kubelet[3137]: E0114 10:50:42.992707    3137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.320599  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:43 kubernetes-upgrade-104742 kubelet[3147]: E0114 10:50:43.779276    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.320972  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:44 kubernetes-upgrade-104742 kubelet[3159]: E0114 10:50:44.488613    3159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.321355  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3300]: E0114 10:50:45.234258    3300 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.321730  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:45 kubernetes-upgrade-104742 kubelet[3311]: E0114 10:50:45.984415    3311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.322097  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:46 kubernetes-upgrade-104742 kubelet[3322]: E0114 10:50:46.735277    3322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.322470  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:47 kubernetes-upgrade-104742 kubelet[3334]: E0114 10:50:47.496867    3334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.322844  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3344]: E0114 10:50:48.257445    3344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.323222  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3355]: E0114 10:50:48.985017    3355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.323589  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:49 kubernetes-upgrade-104742 kubelet[3366]: E0114 10:50:49.735456    3366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.324014  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:50 kubernetes-upgrade-104742 kubelet[3377]: E0114 10:50:50.483422    3377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.324384  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3387]: E0114 10:50:51.233534    3387 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.324752  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3399]: E0114 10:50:51.990479    3399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.325111  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:52 kubernetes-upgrade-104742 kubelet[3410]: E0114 10:50:52.733577    3410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.325498  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:53 kubernetes-upgrade-104742 kubelet[3421]: E0114 10:50:53.485855    3421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.325882  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3432]: E0114 10:50:54.235040    3432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.326246  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3443]: E0114 10:50:54.989468    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.326606  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:55 kubernetes-upgrade-104742 kubelet[3594]: E0114 10:50:55.733273    3594 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.326971  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:56 kubernetes-upgrade-104742 kubelet[3605]: E0114 10:50:56.484817    3605 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.327336  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3616]: E0114 10:50:57.232900    3616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.327726  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3627]: E0114 10:50:57.984575    3627 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.328101  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:58 kubernetes-upgrade-104742 kubelet[3639]: E0114 10:50:58.733663    3639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.328465  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:59 kubernetes-upgrade-104742 kubelet[3650]: E0114 10:50:59.483499    3650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.328830  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3661]: E0114 10:51:00.235866    3661 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.329192  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3672]: E0114 10:51:00.985751    3672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.329558  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:01 kubernetes-upgrade-104742 kubelet[3682]: E0114 10:51:01.733948    3682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.329925  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:02 kubernetes-upgrade-104742 kubelet[3693]: E0114 10:51:02.484830    3693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.330406  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3705]: E0114 10:51:03.233345    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.330842  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3716]: E0114 10:51:03.989937    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.331206  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:04 kubernetes-upgrade-104742 kubelet[3726]: E0114 10:51:04.738287    3726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.331617  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:05 kubernetes-upgrade-104742 kubelet[3739]: E0114 10:51:05.483245    3739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.332027  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3888]: E0114 10:51:06.237895    3888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.332384  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3899]: E0114 10:51:06.994650    3899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.332753  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:07 kubernetes-upgrade-104742 kubelet[3909]: E0114 10:51:07.741049    3909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.333130  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:08 kubernetes-upgrade-104742 kubelet[3920]: E0114 10:51:08.495454    3920 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.333481  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3930]: E0114 10:51:09.247774    3930 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.333838  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3940]: E0114 10:51:09.985310    3940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.334188  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:10 kubernetes-upgrade-104742 kubelet[3951]: E0114 10:51:10.740757    3951 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.334541  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:11 kubernetes-upgrade-104742 kubelet[3962]: E0114 10:51:11.495938    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.334897  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3970]: E0114 10:51:12.239440    3970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.335253  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3982]: E0114 10:51:12.992115    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.335616  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:13 kubernetes-upgrade-104742 kubelet[3993]: E0114 10:51:13.733327    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.336025  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:14 kubernetes-upgrade-104742 kubelet[4004]: E0114 10:51:14.482732    4004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.336405  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4014]: E0114 10:51:15.235362    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.336948  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4025]: E0114 10:51:15.985069    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.337563  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:16 kubernetes-upgrade-104742 kubelet[4122]: E0114 10:51:16.741347    4122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.338142  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:17 kubernetes-upgrade-104742 kubelet[4187]: E0114 10:51:17.485007    4187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.338753  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4199]: E0114 10:51:18.235796    4199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.339370  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4212]: E0114 10:51:18.981589    4212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.340004  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:19 kubernetes-upgrade-104742 kubelet[4223]: E0114 10:51:19.733752    4223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.340694  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:20 kubernetes-upgrade-104742 kubelet[4234]: E0114 10:51:20.483709    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.341308  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4245]: E0114 10:51:21.233286    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.341937  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4256]: E0114 10:51:21.982898    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.342578  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:22 kubernetes-upgrade-104742 kubelet[4267]: E0114 10:51:22.734572    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.343220  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:23 kubernetes-upgrade-104742 kubelet[4279]: E0114 10:51:23.483813    4279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.343922  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4290]: E0114 10:51:24.234893    4290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.344537  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4301]: E0114 10:51:24.992645    4301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.345141  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:25 kubernetes-upgrade-104742 kubelet[4312]: E0114 10:51:25.745419    4312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.345691  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:26 kubernetes-upgrade-104742 kubelet[4322]: E0114 10:51:26.491956    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.346082  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4416]: E0114 10:51:27.238154    4416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:27.346207  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:51:27.346226  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:51:27.368948  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:51:27.368987  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:51:27.431862  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 10:51:27.431894  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:27.431907  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:51:27.432032  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:51:27.432050  189998 out.go:239]   Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4290]: E0114 10:51:24.234893    4290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.432062  189998 out.go:239]   Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4301]: E0114 10:51:24.992645    4301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.432072  189998 out.go:239]   Jan 14 10:51:25 kubernetes-upgrade-104742 kubelet[4312]: E0114 10:51:25.745419    4312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.432082  189998 out.go:239]   Jan 14 10:51:26 kubernetes-upgrade-104742 kubelet[4322]: E0114 10:51:26.491956    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:27.432092  189998 out.go:239]   Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4416]: E0114 10:51:27.238154    4416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:27.432101  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:27.432113  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:51:37.432742  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:51:37.531753  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:51:37.531832  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:51:37.555395  189998 cri.go:87] found id: ""
	I0114 10:51:37.555423  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.555429  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:51:37.555436  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:51:37.555476  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:51:37.578450  189998 cri.go:87] found id: ""
	I0114 10:51:37.578470  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.578477  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:51:37.578483  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:51:37.578522  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:51:37.604079  189998 cri.go:87] found id: ""
	I0114 10:51:37.604110  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.604119  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:51:37.604127  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:51:37.604182  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:51:37.627518  189998 cri.go:87] found id: ""
	I0114 10:51:37.627546  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.627555  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:51:37.627563  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:51:37.627606  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:51:37.650577  189998 cri.go:87] found id: ""
	I0114 10:51:37.650599  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.650605  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:51:37.650611  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:51:37.650666  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:51:37.675321  189998 cri.go:87] found id: ""
	I0114 10:51:37.675346  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.675353  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:51:37.675360  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:51:37.675410  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:51:37.701936  189998 cri.go:87] found id: ""
	I0114 10:51:37.701960  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.701967  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:51:37.701972  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:51:37.702015  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:51:37.728077  189998 cri.go:87] found id: ""
	I0114 10:51:37.728105  189998 logs.go:274] 0 containers: []
	W0114 10:51:37.728114  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:51:37.728131  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:51:37.728145  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:51:37.745054  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3344]: E0114 10:50:48.257445    3344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.745423  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:48 kubernetes-upgrade-104742 kubelet[3355]: E0114 10:50:48.985017    3355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.745779  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:49 kubernetes-upgrade-104742 kubelet[3366]: E0114 10:50:49.735456    3366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.746123  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:50 kubernetes-upgrade-104742 kubelet[3377]: E0114 10:50:50.483422    3377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.746472  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3387]: E0114 10:50:51.233534    3387 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.746824  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:51 kubernetes-upgrade-104742 kubelet[3399]: E0114 10:50:51.990479    3399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.747181  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:52 kubernetes-upgrade-104742 kubelet[3410]: E0114 10:50:52.733577    3410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.747552  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:53 kubernetes-upgrade-104742 kubelet[3421]: E0114 10:50:53.485855    3421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.747921  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3432]: E0114 10:50:54.235040    3432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.748268  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:54 kubernetes-upgrade-104742 kubelet[3443]: E0114 10:50:54.989468    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.748617  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:55 kubernetes-upgrade-104742 kubelet[3594]: E0114 10:50:55.733273    3594 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.748967  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:56 kubernetes-upgrade-104742 kubelet[3605]: E0114 10:50:56.484817    3605 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.749324  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3616]: E0114 10:50:57.232900    3616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.749684  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:57 kubernetes-upgrade-104742 kubelet[3627]: E0114 10:50:57.984575    3627 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.750040  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:58 kubernetes-upgrade-104742 kubelet[3639]: E0114 10:50:58.733663    3639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.750388  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:59 kubernetes-upgrade-104742 kubelet[3650]: E0114 10:50:59.483499    3650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.750732  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3661]: E0114 10:51:00.235866    3661 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.751096  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3672]: E0114 10:51:00.985751    3672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.751598  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:01 kubernetes-upgrade-104742 kubelet[3682]: E0114 10:51:01.733948    3682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.752009  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:02 kubernetes-upgrade-104742 kubelet[3693]: E0114 10:51:02.484830    3693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.752374  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3705]: E0114 10:51:03.233345    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.752725  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3716]: E0114 10:51:03.989937    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.753098  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:04 kubernetes-upgrade-104742 kubelet[3726]: E0114 10:51:04.738287    3726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.753458  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:05 kubernetes-upgrade-104742 kubelet[3739]: E0114 10:51:05.483245    3739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.753819  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3888]: E0114 10:51:06.237895    3888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.754163  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3899]: E0114 10:51:06.994650    3899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.754512  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:07 kubernetes-upgrade-104742 kubelet[3909]: E0114 10:51:07.741049    3909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.754858  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:08 kubernetes-upgrade-104742 kubelet[3920]: E0114 10:51:08.495454    3920 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.755206  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3930]: E0114 10:51:09.247774    3930 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.755554  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3940]: E0114 10:51:09.985310    3940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.755923  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:10 kubernetes-upgrade-104742 kubelet[3951]: E0114 10:51:10.740757    3951 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.756277  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:11 kubernetes-upgrade-104742 kubelet[3962]: E0114 10:51:11.495938    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.756625  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3970]: E0114 10:51:12.239440    3970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.756972  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3982]: E0114 10:51:12.992115    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.757319  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:13 kubernetes-upgrade-104742 kubelet[3993]: E0114 10:51:13.733327    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.757665  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:14 kubernetes-upgrade-104742 kubelet[4004]: E0114 10:51:14.482732    4004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.758022  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4014]: E0114 10:51:15.235362    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.758374  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4025]: E0114 10:51:15.985069    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.758725  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:16 kubernetes-upgrade-104742 kubelet[4122]: E0114 10:51:16.741347    4122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.759082  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:17 kubernetes-upgrade-104742 kubelet[4187]: E0114 10:51:17.485007    4187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.759441  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4199]: E0114 10:51:18.235796    4199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.759830  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4212]: E0114 10:51:18.981589    4212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.760181  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:19 kubernetes-upgrade-104742 kubelet[4223]: E0114 10:51:19.733752    4223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.760528  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:20 kubernetes-upgrade-104742 kubelet[4234]: E0114 10:51:20.483709    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.760874  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4245]: E0114 10:51:21.233286    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.761225  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4256]: E0114 10:51:21.982898    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.761575  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:22 kubernetes-upgrade-104742 kubelet[4267]: E0114 10:51:22.734572    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.761921  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:23 kubernetes-upgrade-104742 kubelet[4279]: E0114 10:51:23.483813    4279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.762266  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4290]: E0114 10:51:24.234893    4290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.762617  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4301]: E0114 10:51:24.992645    4301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.762960  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:25 kubernetes-upgrade-104742 kubelet[4312]: E0114 10:51:25.745419    4312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.763311  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:26 kubernetes-upgrade-104742 kubelet[4322]: E0114 10:51:26.491956    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.763657  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4416]: E0114 10:51:27.238154    4416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.764074  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4478]: E0114 10:51:27.984211    4478 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.764435  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:28 kubernetes-upgrade-104742 kubelet[4489]: E0114 10:51:28.744042    4489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.764805  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:29 kubernetes-upgrade-104742 kubelet[4499]: E0114 10:51:29.497683    4499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.765150  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4508]: E0114 10:51:30.243349    4508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.765508  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4519]: E0114 10:51:30.990635    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.765869  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:31 kubernetes-upgrade-104742 kubelet[4530]: E0114 10:51:31.738173    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.766217  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:32 kubernetes-upgrade-104742 kubelet[4541]: E0114 10:51:32.491825    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.766568  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4552]: E0114 10:51:33.233384    4552 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.766923  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.767288  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.767636  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.767999  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.768371  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4608]: E0114 10:51:36.984778    4608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:37.768639  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:51:37.768656  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:51:37.784750  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:51:37.784779  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:51:37.839503  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 10:51:37.839528  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:51:37.839542  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:51:37.875661  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:51:37.875726  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:51:37.901856  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:37.901879  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:51:37.901977  189998 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0114 10:51:37.901990  189998 out.go:239]   Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.901994  189998 out.go:239]   Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.901999  189998 out.go:239]   Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.902006  189998 out.go:239]   Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:37.902012  189998 out.go:239]   Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4608]: E0114 10:51:36.984778    4608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:37.902019  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:37.902024  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:51:47.902276  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:51:48.031595  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:51:48.031717  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:51:48.061105  189998 cri.go:87] found id: ""
	I0114 10:51:48.061128  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.061136  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:51:48.061142  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:51:48.061195  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:51:48.088540  189998 cri.go:87] found id: ""
	I0114 10:51:48.088561  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.088568  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:51:48.088574  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:51:48.088618  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:51:48.111604  189998 cri.go:87] found id: ""
	I0114 10:51:48.111626  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.111633  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:51:48.111638  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:51:48.111792  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:51:48.137184  189998 cri.go:87] found id: ""
	I0114 10:51:48.137257  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.137272  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:51:48.137281  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:51:48.137329  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:51:48.169977  189998 cri.go:87] found id: ""
	I0114 10:51:48.170004  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.170013  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:51:48.170021  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:51:48.170072  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:51:48.197093  189998 cri.go:87] found id: ""
	I0114 10:51:48.197120  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.197130  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:51:48.197139  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:51:48.197195  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:51:48.224623  189998 cri.go:87] found id: ""
	I0114 10:51:48.224648  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.224656  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:51:48.224662  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:51:48.224704  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:51:48.250053  189998 cri.go:87] found id: ""
	I0114 10:51:48.250082  189998 logs.go:274] 0 containers: []
	W0114 10:51:48.250092  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:51:48.250104  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:51:48.250120  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:51:48.311307  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:51:48.311344  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:51:48.311357  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:51:48.362057  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:51:48.362104  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:51:48.391166  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:51:48.391199  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:51:48.413410  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:58 kubernetes-upgrade-104742 kubelet[3639]: E0114 10:50:58.733663    3639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.413778  189998 logs.go:138] Found kubelet problem: Jan 14 10:50:59 kubernetes-upgrade-104742 kubelet[3650]: E0114 10:50:59.483499    3650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.414133  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3661]: E0114 10:51:00.235866    3661 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.414491  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:00 kubernetes-upgrade-104742 kubelet[3672]: E0114 10:51:00.985751    3672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.414842  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:01 kubernetes-upgrade-104742 kubelet[3682]: E0114 10:51:01.733948    3682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.415329  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:02 kubernetes-upgrade-104742 kubelet[3693]: E0114 10:51:02.484830    3693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.415777  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3705]: E0114 10:51:03.233345    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.416297  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:03 kubernetes-upgrade-104742 kubelet[3716]: E0114 10:51:03.989937    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.416651  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:04 kubernetes-upgrade-104742 kubelet[3726]: E0114 10:51:04.738287    3726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.417004  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:05 kubernetes-upgrade-104742 kubelet[3739]: E0114 10:51:05.483245    3739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.417350  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3888]: E0114 10:51:06.237895    3888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.417695  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:06 kubernetes-upgrade-104742 kubelet[3899]: E0114 10:51:06.994650    3899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.418043  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:07 kubernetes-upgrade-104742 kubelet[3909]: E0114 10:51:07.741049    3909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.418407  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:08 kubernetes-upgrade-104742 kubelet[3920]: E0114 10:51:08.495454    3920 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.418900  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3930]: E0114 10:51:09.247774    3930 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.419269  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3940]: E0114 10:51:09.985310    3940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.419618  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:10 kubernetes-upgrade-104742 kubelet[3951]: E0114 10:51:10.740757    3951 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.420002  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:11 kubernetes-upgrade-104742 kubelet[3962]: E0114 10:51:11.495938    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.420358  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3970]: E0114 10:51:12.239440    3970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.420704  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3982]: E0114 10:51:12.992115    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.421058  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:13 kubernetes-upgrade-104742 kubelet[3993]: E0114 10:51:13.733327    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.421416  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:14 kubernetes-upgrade-104742 kubelet[4004]: E0114 10:51:14.482732    4004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.421770  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4014]: E0114 10:51:15.235362    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.422119  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4025]: E0114 10:51:15.985069    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.422476  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:16 kubernetes-upgrade-104742 kubelet[4122]: E0114 10:51:16.741347    4122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.422821  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:17 kubernetes-upgrade-104742 kubelet[4187]: E0114 10:51:17.485007    4187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.423187  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4199]: E0114 10:51:18.235796    4199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.423537  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4212]: E0114 10:51:18.981589    4212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.423918  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:19 kubernetes-upgrade-104742 kubelet[4223]: E0114 10:51:19.733752    4223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.424299  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:20 kubernetes-upgrade-104742 kubelet[4234]: E0114 10:51:20.483709    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.424649  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4245]: E0114 10:51:21.233286    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.424999  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4256]: E0114 10:51:21.982898    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.425349  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:22 kubernetes-upgrade-104742 kubelet[4267]: E0114 10:51:22.734572    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.425715  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:23 kubernetes-upgrade-104742 kubelet[4279]: E0114 10:51:23.483813    4279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.426079  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4290]: E0114 10:51:24.234893    4290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.426426  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4301]: E0114 10:51:24.992645    4301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.426770  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:25 kubernetes-upgrade-104742 kubelet[4312]: E0114 10:51:25.745419    4312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.427118  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:26 kubernetes-upgrade-104742 kubelet[4322]: E0114 10:51:26.491956    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.427465  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4416]: E0114 10:51:27.238154    4416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.427867  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4478]: E0114 10:51:27.984211    4478 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.428253  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:28 kubernetes-upgrade-104742 kubelet[4489]: E0114 10:51:28.744042    4489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.428603  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:29 kubernetes-upgrade-104742 kubelet[4499]: E0114 10:51:29.497683    4499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.428955  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4508]: E0114 10:51:30.243349    4508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.429353  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4519]: E0114 10:51:30.990635    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.429710  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:31 kubernetes-upgrade-104742 kubelet[4530]: E0114 10:51:31.738173    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.430062  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:32 kubernetes-upgrade-104742 kubelet[4541]: E0114 10:51:32.491825    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.430428  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4552]: E0114 10:51:33.233384    4552 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.430777  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.431128  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.431494  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.432029  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.432391  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4608]: E0114 10:51:36.984778    4608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.432806  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:37 kubernetes-upgrade-104742 kubelet[4705]: E0114 10:51:37.738365    4705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.433189  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:38 kubernetes-upgrade-104742 kubelet[4768]: E0114 10:51:38.483174    4768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.433537  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4778]: E0114 10:51:39.237603    4778 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.433897  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4790]: E0114 10:51:39.986112    4790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.434248  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:40 kubernetes-upgrade-104742 kubelet[4801]: E0114 10:51:40.735011    4801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.434595  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:41 kubernetes-upgrade-104742 kubelet[4812]: E0114 10:51:41.487093    4812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.434947  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4823]: E0114 10:51:42.233579    4823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.435304  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4834]: E0114 10:51:42.986964    4834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.435648  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:43 kubernetes-upgrade-104742 kubelet[4845]: E0114 10:51:43.735979    4845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.436029  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:44 kubernetes-upgrade-104742 kubelet[4858]: E0114 10:51:44.483770    4858 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.436375  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4868]: E0114 10:51:45.235312    4868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.436724  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4879]: E0114 10:51:45.987032    4879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.437082  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:46 kubernetes-upgrade-104742 kubelet[4890]: E0114 10:51:46.737934    4890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.437453  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:47 kubernetes-upgrade-104742 kubelet[4901]: E0114 10:51:47.500469    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.437802  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[4983]: E0114 10:51:48.246115    4983 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:48.437924  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:51:48.437942  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:51:48.460816  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:48.460854  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:51:48.460999  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:51:48.461016  189998 out.go:239]   Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4868]: E0114 10:51:45.235312    4868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.461026  189998 out.go:239]   Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4879]: E0114 10:51:45.987032    4879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.461038  189998 out.go:239]   Jan 14 10:51:46 kubernetes-upgrade-104742 kubelet[4890]: E0114 10:51:46.737934    4890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.461051  189998 out.go:239]   Jan 14 10:51:47 kubernetes-upgrade-104742 kubelet[4901]: E0114 10:51:47.500469    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:48.461061  189998 out.go:239]   Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[4983]: E0114 10:51:48.246115    4983 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:48.461072  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:48.461084  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:51:58.462406  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:51:58.531658  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:51:58.531811  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:51:58.556345  189998 cri.go:87] found id: ""
	I0114 10:51:58.556368  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.556375  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:51:58.556382  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:51:58.556434  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:51:58.579535  189998 cri.go:87] found id: ""
	I0114 10:51:58.579565  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.579574  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:51:58.579583  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:51:58.579637  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:51:58.603132  189998 cri.go:87] found id: ""
	I0114 10:51:58.603152  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.603158  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:51:58.603164  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:51:58.603212  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:51:58.627955  189998 cri.go:87] found id: ""
	I0114 10:51:58.627987  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.627997  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:51:58.628006  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:51:58.628051  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:51:58.651787  189998 cri.go:87] found id: ""
	I0114 10:51:58.651811  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.651818  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:51:58.651823  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:51:58.651864  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:51:58.674538  189998 cri.go:87] found id: ""
	I0114 10:51:58.674565  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.674576  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:51:58.674585  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:51:58.674640  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:51:58.699298  189998 cri.go:87] found id: ""
	I0114 10:51:58.699322  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.699330  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:51:58.699336  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:51:58.699383  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:51:58.728403  189998 cri.go:87] found id: ""
	I0114 10:51:58.728430  189998 logs.go:274] 0 containers: []
	W0114 10:51:58.728439  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:51:58.728451  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:51:58.728467  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:51:58.747987  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:51:58.748021  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:51:58.803575  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:51:58.803594  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:51:58.803603  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:51:58.840148  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:51:58.840186  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:51:58.866410  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:51:58.866439  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:51:58.884647  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3930]: E0114 10:51:09.247774    3930 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.885038  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:09 kubernetes-upgrade-104742 kubelet[3940]: E0114 10:51:09.985310    3940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.885408  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:10 kubernetes-upgrade-104742 kubelet[3951]: E0114 10:51:10.740757    3951 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.885778  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:11 kubernetes-upgrade-104742 kubelet[3962]: E0114 10:51:11.495938    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.886130  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3970]: E0114 10:51:12.239440    3970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.886484  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:12 kubernetes-upgrade-104742 kubelet[3982]: E0114 10:51:12.992115    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.886837  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:13 kubernetes-upgrade-104742 kubelet[3993]: E0114 10:51:13.733327    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.887187  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:14 kubernetes-upgrade-104742 kubelet[4004]: E0114 10:51:14.482732    4004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.887541  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4014]: E0114 10:51:15.235362    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.887976  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:15 kubernetes-upgrade-104742 kubelet[4025]: E0114 10:51:15.985069    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.888348  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:16 kubernetes-upgrade-104742 kubelet[4122]: E0114 10:51:16.741347    4122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.888700  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:17 kubernetes-upgrade-104742 kubelet[4187]: E0114 10:51:17.485007    4187 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.889051  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4199]: E0114 10:51:18.235796    4199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.889421  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:18 kubernetes-upgrade-104742 kubelet[4212]: E0114 10:51:18.981589    4212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.889773  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:19 kubernetes-upgrade-104742 kubelet[4223]: E0114 10:51:19.733752    4223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.890126  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:20 kubernetes-upgrade-104742 kubelet[4234]: E0114 10:51:20.483709    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.890480  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4245]: E0114 10:51:21.233286    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.890837  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4256]: E0114 10:51:21.982898    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.891187  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:22 kubernetes-upgrade-104742 kubelet[4267]: E0114 10:51:22.734572    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.891548  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:23 kubernetes-upgrade-104742 kubelet[4279]: E0114 10:51:23.483813    4279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.891918  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4290]: E0114 10:51:24.234893    4290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.892274  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4301]: E0114 10:51:24.992645    4301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.892627  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:25 kubernetes-upgrade-104742 kubelet[4312]: E0114 10:51:25.745419    4312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.892990  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:26 kubernetes-upgrade-104742 kubelet[4322]: E0114 10:51:26.491956    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.893381  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4416]: E0114 10:51:27.238154    4416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.893748  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4478]: E0114 10:51:27.984211    4478 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.894098  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:28 kubernetes-upgrade-104742 kubelet[4489]: E0114 10:51:28.744042    4489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.894453  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:29 kubernetes-upgrade-104742 kubelet[4499]: E0114 10:51:29.497683    4499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.894814  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4508]: E0114 10:51:30.243349    4508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.895165  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4519]: E0114 10:51:30.990635    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.895517  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:31 kubernetes-upgrade-104742 kubelet[4530]: E0114 10:51:31.738173    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.895955  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:32 kubernetes-upgrade-104742 kubelet[4541]: E0114 10:51:32.491825    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.896311  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4552]: E0114 10:51:33.233384    4552 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.896673  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.897023  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.897383  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.897769  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.898123  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4608]: E0114 10:51:36.984778    4608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.898477  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:37 kubernetes-upgrade-104742 kubelet[4705]: E0114 10:51:37.738365    4705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.898829  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:38 kubernetes-upgrade-104742 kubelet[4768]: E0114 10:51:38.483174    4768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.899189  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4778]: E0114 10:51:39.237603    4778 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.899544  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4790]: E0114 10:51:39.986112    4790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.899920  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:40 kubernetes-upgrade-104742 kubelet[4801]: E0114 10:51:40.735011    4801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.900276  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:41 kubernetes-upgrade-104742 kubelet[4812]: E0114 10:51:41.487093    4812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.900635  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4823]: E0114 10:51:42.233579    4823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.900984  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4834]: E0114 10:51:42.986964    4834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.901333  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:43 kubernetes-upgrade-104742 kubelet[4845]: E0114 10:51:43.735979    4845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.901683  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:44 kubernetes-upgrade-104742 kubelet[4858]: E0114 10:51:44.483770    4858 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.902033  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4868]: E0114 10:51:45.235312    4868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.902388  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4879]: E0114 10:51:45.987032    4879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.902743  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:46 kubernetes-upgrade-104742 kubelet[4890]: E0114 10:51:46.737934    4890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.903099  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:47 kubernetes-upgrade-104742 kubelet[4901]: E0114 10:51:47.500469    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.903462  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[4983]: E0114 10:51:48.246115    4983 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.903831  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[5057]: E0114 10:51:48.988364    5057 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.904181  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:49 kubernetes-upgrade-104742 kubelet[5068]: E0114 10:51:49.743020    5068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.904535  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:50 kubernetes-upgrade-104742 kubelet[5079]: E0114 10:51:50.484641    5079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.904882  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5090]: E0114 10:51:51.246878    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.905235  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5101]: E0114 10:51:51.988442    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.905585  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:52 kubernetes-upgrade-104742 kubelet[5112]: E0114 10:51:52.736273    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.905959  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:53 kubernetes-upgrade-104742 kubelet[5122]: E0114 10:51:53.485577    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.906310  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5133]: E0114 10:51:54.234420    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.906665  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5144]: E0114 10:51:54.985086    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.907031  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:55 kubernetes-upgrade-104742 kubelet[5156]: E0114 10:51:55.734907    5156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.907393  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:56 kubernetes-upgrade-104742 kubelet[5168]: E0114 10:51:56.483735    5168 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.907774  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5179]: E0114 10:51:57.230519    5179 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.908127  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5189]: E0114 10:51:57.982756    5189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.908484  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:58 kubernetes-upgrade-104742 kubelet[5286]: E0114 10:51:58.742239    5286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:58.908606  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:58.908618  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:51:58.908734  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:51:58.908745  189998 out.go:239]   Jan 14 10:51:55 kubernetes-upgrade-104742 kubelet[5156]: E0114 10:51:55.734907    5156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.908753  189998 out.go:239]   Jan 14 10:51:56 kubernetes-upgrade-104742 kubelet[5168]: E0114 10:51:56.483735    5168 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.908758  189998 out.go:239]   Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5179]: E0114 10:51:57.230519    5179 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.908765  189998 out.go:239]   Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5189]: E0114 10:51:57.982756    5189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:51:58.908775  189998 out.go:239]   Jan 14 10:51:58 kubernetes-upgrade-104742 kubelet[5286]: E0114 10:51:58.742239    5286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:51:58.908782  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:51:58.908787  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:52:08.909991  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:52:09.031933  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:52:09.032005  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:52:09.055959  189998 cri.go:87] found id: ""
	I0114 10:52:09.055989  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.055999  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:52:09.056008  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:52:09.056062  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:52:09.079931  189998 cri.go:87] found id: ""
	I0114 10:52:09.079950  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.079959  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:52:09.079968  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:52:09.080026  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:52:09.103051  189998 cri.go:87] found id: ""
	I0114 10:52:09.103078  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.103086  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:52:09.103094  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:52:09.103139  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:52:09.126270  189998 cri.go:87] found id: ""
	I0114 10:52:09.127574  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.127584  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:52:09.127591  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:52:09.127636  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:52:09.154641  189998 cri.go:87] found id: ""
	I0114 10:52:09.154668  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.154677  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:52:09.154685  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:52:09.154733  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:52:09.178813  189998 cri.go:87] found id: ""
	I0114 10:52:09.178840  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.178850  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:52:09.178858  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:52:09.178913  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:52:09.202460  189998 cri.go:87] found id: ""
	I0114 10:52:09.202485  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.202491  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:52:09.202497  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:52:09.202548  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:52:09.225821  189998 cri.go:87] found id: ""
	I0114 10:52:09.225853  189998 logs.go:274] 0 containers: []
	W0114 10:52:09.225861  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:52:09.225870  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:52:09.225882  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:52:09.242863  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:19 kubernetes-upgrade-104742 kubelet[4223]: E0114 10:51:19.733752    4223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.243459  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:20 kubernetes-upgrade-104742 kubelet[4234]: E0114 10:51:20.483709    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.244071  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4245]: E0114 10:51:21.233286    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.244662  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:21 kubernetes-upgrade-104742 kubelet[4256]: E0114 10:51:21.982898    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.245237  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:22 kubernetes-upgrade-104742 kubelet[4267]: E0114 10:51:22.734572    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.245828  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:23 kubernetes-upgrade-104742 kubelet[4279]: E0114 10:51:23.483813    4279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.246404  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4290]: E0114 10:51:24.234893    4290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.246922  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:24 kubernetes-upgrade-104742 kubelet[4301]: E0114 10:51:24.992645    4301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.247300  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:25 kubernetes-upgrade-104742 kubelet[4312]: E0114 10:51:25.745419    4312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.247729  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:26 kubernetes-upgrade-104742 kubelet[4322]: E0114 10:51:26.491956    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.248129  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4416]: E0114 10:51:27.238154    4416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.248495  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:27 kubernetes-upgrade-104742 kubelet[4478]: E0114 10:51:27.984211    4478 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.248868  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:28 kubernetes-upgrade-104742 kubelet[4489]: E0114 10:51:28.744042    4489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.249217  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:29 kubernetes-upgrade-104742 kubelet[4499]: E0114 10:51:29.497683    4499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.249568  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4508]: E0114 10:51:30.243349    4508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.249922  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4519]: E0114 10:51:30.990635    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.250275  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:31 kubernetes-upgrade-104742 kubelet[4530]: E0114 10:51:31.738173    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.250645  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:32 kubernetes-upgrade-104742 kubelet[4541]: E0114 10:51:32.491825    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.251002  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4552]: E0114 10:51:33.233384    4552 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.251353  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.251726  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.252083  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.252433  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.252787  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4608]: E0114 10:51:36.984778    4608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.253141  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:37 kubernetes-upgrade-104742 kubelet[4705]: E0114 10:51:37.738365    4705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.253506  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:38 kubernetes-upgrade-104742 kubelet[4768]: E0114 10:51:38.483174    4768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.253866  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4778]: E0114 10:51:39.237603    4778 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.254222  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4790]: E0114 10:51:39.986112    4790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.254573  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:40 kubernetes-upgrade-104742 kubelet[4801]: E0114 10:51:40.735011    4801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.254935  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:41 kubernetes-upgrade-104742 kubelet[4812]: E0114 10:51:41.487093    4812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.255283  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4823]: E0114 10:51:42.233579    4823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.255634  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4834]: E0114 10:51:42.986964    4834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.256049  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:43 kubernetes-upgrade-104742 kubelet[4845]: E0114 10:51:43.735979    4845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.256409  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:44 kubernetes-upgrade-104742 kubelet[4858]: E0114 10:51:44.483770    4858 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.256773  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4868]: E0114 10:51:45.235312    4868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.257136  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4879]: E0114 10:51:45.987032    4879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.257500  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:46 kubernetes-upgrade-104742 kubelet[4890]: E0114 10:51:46.737934    4890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.257851  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:47 kubernetes-upgrade-104742 kubelet[4901]: E0114 10:51:47.500469    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.258201  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[4983]: E0114 10:51:48.246115    4983 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.258558  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[5057]: E0114 10:51:48.988364    5057 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.258915  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:49 kubernetes-upgrade-104742 kubelet[5068]: E0114 10:51:49.743020    5068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.259269  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:50 kubernetes-upgrade-104742 kubelet[5079]: E0114 10:51:50.484641    5079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.259619  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5090]: E0114 10:51:51.246878    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.259995  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5101]: E0114 10:51:51.988442    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.260348  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:52 kubernetes-upgrade-104742 kubelet[5112]: E0114 10:51:52.736273    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.260701  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:53 kubernetes-upgrade-104742 kubelet[5122]: E0114 10:51:53.485577    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.261074  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5133]: E0114 10:51:54.234420    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.261428  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5144]: E0114 10:51:54.985086    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.261784  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:55 kubernetes-upgrade-104742 kubelet[5156]: E0114 10:51:55.734907    5156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.262137  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:56 kubernetes-upgrade-104742 kubelet[5168]: E0114 10:51:56.483735    5168 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.262492  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5179]: E0114 10:51:57.230519    5179 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.262849  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5189]: E0114 10:51:57.982756    5189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.263197  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:58 kubernetes-upgrade-104742 kubelet[5286]: E0114 10:51:58.742239    5286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.263556  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:59 kubernetes-upgrade-104742 kubelet[5348]: E0114 10:51:59.483478    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.263922  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5359]: E0114 10:52:00.233284    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.264278  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5370]: E0114 10:52:00.984473    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.264634  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:01 kubernetes-upgrade-104742 kubelet[5381]: E0114 10:52:01.734985    5381 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.264986  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:02 kubernetes-upgrade-104742 kubelet[5392]: E0114 10:52:02.483439    5392 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.265338  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5403]: E0114 10:52:03.234816    5403 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.265701  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5414]: E0114 10:52:03.983211    5414 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.266055  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:04 kubernetes-upgrade-104742 kubelet[5425]: E0114 10:52:04.733945    5425 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.266406  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:05 kubernetes-upgrade-104742 kubelet[5436]: E0114 10:52:05.482445    5436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.266771  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5447]: E0114 10:52:06.235307    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.267125  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5459]: E0114 10:52:06.984537    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.267482  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:07 kubernetes-upgrade-104742 kubelet[5469]: E0114 10:52:07.736201    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.268743  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:08 kubernetes-upgrade-104742 kubelet[5480]: E0114 10:52:08.483518    5480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.269211  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5577]: E0114 10:52:09.236012    5577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:09.269225  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:52:09.269250  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:52:09.287004  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:52:09.287034  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:52:09.343495  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
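	# Why every kubelet restart above fails: the --cni-conf-dir kubelet flag
	# was removed together with dockershim (Kubernetes v1.24), so the v1.25.3
	# kubelet exits before the apiserver can come up. A sketch for checking
	# this by hand; the profile name comes from these logs, and the drop-in
	# path is an assumption based on minikube's usual layout:
	minikube ssh -p kubernetes-upgrade-104742 "sudo journalctl -u kubelet -n 400 | grep -m1 'unknown flag'"
	minikube ssh -p kubernetes-upgrade-104742 "grep -r -e '--cni-conf-dir' /etc/systemd/system/kubelet.service.d/"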
	I0114 10:52:09.343516  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:52:09.343529  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:52:09.384003  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:52:09.384043  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:52:09.411868  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:09.411898  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:52:09.411996  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:52:09.412009  189998 out.go:239]   Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5447]: E0114 10:52:06.235307    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.412017  189998 out.go:239]   Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5459]: E0114 10:52:06.984537    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.412022  189998 out.go:239]   Jan 14 10:52:07 kubernetes-upgrade-104742 kubelet[5469]: E0114 10:52:07.736201    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.412028  189998 out.go:239]   Jan 14 10:52:08 kubernetes-upgrade-104742 kubelet[5480]: E0114 10:52:08.483518    5480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:09.412032  189998 out.go:239]   Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5577]: E0114 10:52:09.236012    5577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:09.412036  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:09.412041  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:52:19.413407  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:52:19.531460  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:52:19.531550  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:52:19.554989  189998 cri.go:87] found id: ""
	I0114 10:52:19.555011  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.555018  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:52:19.555024  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:52:19.555065  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:52:19.578691  189998 cri.go:87] found id: ""
	I0114 10:52:19.578719  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.578728  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:52:19.578737  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:52:19.578794  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:52:19.605801  189998 cri.go:87] found id: ""
	I0114 10:52:19.605834  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.605845  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:52:19.605853  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:52:19.605912  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:52:19.629087  189998 cri.go:87] found id: ""
	I0114 10:52:19.629109  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.629118  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:52:19.629127  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:52:19.629173  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:52:19.656910  189998 cri.go:87] found id: ""
	I0114 10:52:19.656935  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.656943  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:52:19.656951  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:52:19.657042  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:52:19.679601  189998 cri.go:87] found id: ""
	I0114 10:52:19.679649  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.679658  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:52:19.679664  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:52:19.679734  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:52:19.707131  189998 cri.go:87] found id: ""
	I0114 10:52:19.707161  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.707170  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:52:19.707179  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:52:19.707225  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:52:19.732811  189998 cri.go:87] found id: ""
	I0114 10:52:19.732837  189998 logs.go:274] 0 containers: []
	W0114 10:52:19.732847  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:52:19.732858  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:52:19.732872  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:52:19.750401  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:52:19.750440  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:52:19.805854  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:52:19.805882  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:52:19.805894  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:52:19.840387  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:52:19.840419  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:52:19.869386  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:52:19.869410  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:52:19.885449  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4508]: E0114 10:51:30.243349    4508 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.885826  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:30 kubernetes-upgrade-104742 kubelet[4519]: E0114 10:51:30.990635    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.886175  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:31 kubernetes-upgrade-104742 kubelet[4530]: E0114 10:51:31.738173    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.886533  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:32 kubernetes-upgrade-104742 kubelet[4541]: E0114 10:51:32.491825    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.886902  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4552]: E0114 10:51:33.233384    4552 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.887258  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:33 kubernetes-upgrade-104742 kubelet[4563]: E0114 10:51:33.982995    4563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.887605  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:34 kubernetes-upgrade-104742 kubelet[4574]: E0114 10:51:34.731184    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.887991  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:35 kubernetes-upgrade-104742 kubelet[4585]: E0114 10:51:35.505549    4585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.888356  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4597]: E0114 10:51:36.280630    4597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.888704  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:36 kubernetes-upgrade-104742 kubelet[4608]: E0114 10:51:36.984778    4608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.889052  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:37 kubernetes-upgrade-104742 kubelet[4705]: E0114 10:51:37.738365    4705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.889403  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:38 kubernetes-upgrade-104742 kubelet[4768]: E0114 10:51:38.483174    4768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.889757  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4778]: E0114 10:51:39.237603    4778 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.890103  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:39 kubernetes-upgrade-104742 kubelet[4790]: E0114 10:51:39.986112    4790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.890460  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:40 kubernetes-upgrade-104742 kubelet[4801]: E0114 10:51:40.735011    4801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.890813  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:41 kubernetes-upgrade-104742 kubelet[4812]: E0114 10:51:41.487093    4812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.891267  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4823]: E0114 10:51:42.233579    4823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.891697  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4834]: E0114 10:51:42.986964    4834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.892113  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:43 kubernetes-upgrade-104742 kubelet[4845]: E0114 10:51:43.735979    4845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.892473  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:44 kubernetes-upgrade-104742 kubelet[4858]: E0114 10:51:44.483770    4858 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.892910  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4868]: E0114 10:51:45.235312    4868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.893304  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4879]: E0114 10:51:45.987032    4879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.893659  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:46 kubernetes-upgrade-104742 kubelet[4890]: E0114 10:51:46.737934    4890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.894012  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:47 kubernetes-upgrade-104742 kubelet[4901]: E0114 10:51:47.500469    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.894377  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[4983]: E0114 10:51:48.246115    4983 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.894729  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[5057]: E0114 10:51:48.988364    5057 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.895079  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:49 kubernetes-upgrade-104742 kubelet[5068]: E0114 10:51:49.743020    5068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.895465  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:50 kubernetes-upgrade-104742 kubelet[5079]: E0114 10:51:50.484641    5079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.895830  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5090]: E0114 10:51:51.246878    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.896181  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5101]: E0114 10:51:51.988442    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.896537  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:52 kubernetes-upgrade-104742 kubelet[5112]: E0114 10:51:52.736273    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.896886  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:53 kubernetes-upgrade-104742 kubelet[5122]: E0114 10:51:53.485577    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.897236  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5133]: E0114 10:51:54.234420    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.897601  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5144]: E0114 10:51:54.985086    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.897951  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:55 kubernetes-upgrade-104742 kubelet[5156]: E0114 10:51:55.734907    5156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.898312  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:56 kubernetes-upgrade-104742 kubelet[5168]: E0114 10:51:56.483735    5168 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.898669  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5179]: E0114 10:51:57.230519    5179 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.899015  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5189]: E0114 10:51:57.982756    5189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.899429  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:58 kubernetes-upgrade-104742 kubelet[5286]: E0114 10:51:58.742239    5286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.899920  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:59 kubernetes-upgrade-104742 kubelet[5348]: E0114 10:51:59.483478    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.900278  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5359]: E0114 10:52:00.233284    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.900629  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5370]: E0114 10:52:00.984473    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.900995  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:01 kubernetes-upgrade-104742 kubelet[5381]: E0114 10:52:01.734985    5381 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.901343  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:02 kubernetes-upgrade-104742 kubelet[5392]: E0114 10:52:02.483439    5392 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.901704  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5403]: E0114 10:52:03.234816    5403 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.902057  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5414]: E0114 10:52:03.983211    5414 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.902441  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:04 kubernetes-upgrade-104742 kubelet[5425]: E0114 10:52:04.733945    5425 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.902904  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:05 kubernetes-upgrade-104742 kubelet[5436]: E0114 10:52:05.482445    5436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.903259  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5447]: E0114 10:52:06.235307    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.903606  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5459]: E0114 10:52:06.984537    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.904017  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:07 kubernetes-upgrade-104742 kubelet[5469]: E0114 10:52:07.736201    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.904402  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:08 kubernetes-upgrade-104742 kubelet[5480]: E0114 10:52:08.483518    5480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.904762  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5577]: E0114 10:52:09.236012    5577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.905114  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5640]: E0114 10:52:09.986061    5640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.905468  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:10 kubernetes-upgrade-104742 kubelet[5651]: E0114 10:52:10.735719    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.905839  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:11 kubernetes-upgrade-104742 kubelet[5662]: E0114 10:52:11.485154    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.906188  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5672]: E0114 10:52:12.233473    5672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.906541  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5684]: E0114 10:52:12.983242    5684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.906894  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:13 kubernetes-upgrade-104742 kubelet[5695]: E0114 10:52:13.731762    5695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.907245  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:14 kubernetes-upgrade-104742 kubelet[5706]: E0114 10:52:14.482849    5706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.907592  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5718]: E0114 10:52:15.232657    5718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.908070  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5729]: E0114 10:52:15.982605    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.908439  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:16 kubernetes-upgrade-104742 kubelet[5741]: E0114 10:52:16.732973    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.908790  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:17 kubernetes-upgrade-104742 kubelet[5752]: E0114 10:52:17.482284    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.909142  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5763]: E0114 10:52:18.231585    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.909508  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5774]: E0114 10:52:18.984819    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.909857  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:19 kubernetes-upgrade-104742 kubelet[5864]: E0114 10:52:19.738971    5864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:19.909977  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:19.909989  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:52:19.910097  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:52:19.910108  189998 out.go:239]   Jan 14 10:52:16 kubernetes-upgrade-104742 kubelet[5741]: E0114 10:52:16.732973    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.910113  189998 out.go:239]   Jan 14 10:52:17 kubernetes-upgrade-104742 kubelet[5752]: E0114 10:52:17.482284    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.910119  189998 out.go:239]   Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5763]: E0114 10:52:18.231585    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.910132  189998 out.go:239]   Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5774]: E0114 10:52:18.984819    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:19.910152  189998 out.go:239]   Jan 14 10:52:19 kubernetes-upgrade-104742 kubelet[5864]: E0114 10:52:19.738971    5864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:19.910161  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:19.910171  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
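	# From here minikube only re-polls (pgrep -xnf kube-apiserver.*minikube.*)
	# and re-collects the same logs; nothing recovers until the stale flag is
	# gone. A possible manual workaround, strictly a sketch: the file path and
	# sed expression below are assumptions, not something this test ran.
	minikube ssh -p kubernetes-upgrade-104742 \
	  "sudo sed -i 's/ --cni-conf-dir=[^ ]*//' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"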
	I0114 10:52:29.910441  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:52:30.031418  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:52:30.031484  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:52:30.056039  189998 cri.go:87] found id: ""
	I0114 10:52:30.056063  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.056072  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:52:30.056082  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:52:30.056135  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:52:30.079492  189998 cri.go:87] found id: ""
	I0114 10:52:30.079516  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.079525  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:52:30.079537  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:52:30.079590  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:52:30.101860  189998 cri.go:87] found id: ""
	I0114 10:52:30.101890  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.101899  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:52:30.101907  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:52:30.101967  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:52:30.127104  189998 cri.go:87] found id: ""
	I0114 10:52:30.127129  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.127138  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:52:30.127154  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:52:30.127200  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:52:30.150197  189998 cri.go:87] found id: ""
	I0114 10:52:30.150225  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.150238  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:52:30.150245  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:52:30.150292  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:52:30.173207  189998 cri.go:87] found id: ""
	I0114 10:52:30.173232  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.173247  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:52:30.173254  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:52:30.173307  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:52:30.197670  189998 cri.go:87] found id: ""
	I0114 10:52:30.197698  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.197706  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:52:30.197715  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:52:30.197773  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:52:30.224500  189998 cri.go:87] found id: ""
	I0114 10:52:30.224526  189998 logs.go:274] 0 containers: []
	W0114 10:52:30.224536  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:52:30.224548  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:52:30.224561  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:52:30.242974  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:40 kubernetes-upgrade-104742 kubelet[4801]: E0114 10:51:40.735011    4801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.243376  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:41 kubernetes-upgrade-104742 kubelet[4812]: E0114 10:51:41.487093    4812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.243826  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4823]: E0114 10:51:42.233579    4823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.244224  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:42 kubernetes-upgrade-104742 kubelet[4834]: E0114 10:51:42.986964    4834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.244607  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:43 kubernetes-upgrade-104742 kubelet[4845]: E0114 10:51:43.735979    4845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.244966  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:44 kubernetes-upgrade-104742 kubelet[4858]: E0114 10:51:44.483770    4858 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.245344  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4868]: E0114 10:51:45.235312    4868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.245762  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:45 kubernetes-upgrade-104742 kubelet[4879]: E0114 10:51:45.987032    4879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.246123  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:46 kubernetes-upgrade-104742 kubelet[4890]: E0114 10:51:46.737934    4890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.246490  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:47 kubernetes-upgrade-104742 kubelet[4901]: E0114 10:51:47.500469    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.246846  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[4983]: E0114 10:51:48.246115    4983 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.247206  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:48 kubernetes-upgrade-104742 kubelet[5057]: E0114 10:51:48.988364    5057 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.247563  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:49 kubernetes-upgrade-104742 kubelet[5068]: E0114 10:51:49.743020    5068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.247980  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:50 kubernetes-upgrade-104742 kubelet[5079]: E0114 10:51:50.484641    5079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.248329  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5090]: E0114 10:51:51.246878    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.248737  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5101]: E0114 10:51:51.988442    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.249101  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:52 kubernetes-upgrade-104742 kubelet[5112]: E0114 10:51:52.736273    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.249459  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:53 kubernetes-upgrade-104742 kubelet[5122]: E0114 10:51:53.485577    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.249827  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5133]: E0114 10:51:54.234420    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.250177  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5144]: E0114 10:51:54.985086    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.250533  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:55 kubernetes-upgrade-104742 kubelet[5156]: E0114 10:51:55.734907    5156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.250879  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:56 kubernetes-upgrade-104742 kubelet[5168]: E0114 10:51:56.483735    5168 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.251224  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5179]: E0114 10:51:57.230519    5179 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.251569  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5189]: E0114 10:51:57.982756    5189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.251965  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:58 kubernetes-upgrade-104742 kubelet[5286]: E0114 10:51:58.742239    5286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.252310  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:59 kubernetes-upgrade-104742 kubelet[5348]: E0114 10:51:59.483478    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.252665  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5359]: E0114 10:52:00.233284    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.253036  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5370]: E0114 10:52:00.984473    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.253382  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:01 kubernetes-upgrade-104742 kubelet[5381]: E0114 10:52:01.734985    5381 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.253729  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:02 kubernetes-upgrade-104742 kubelet[5392]: E0114 10:52:02.483439    5392 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.254072  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5403]: E0114 10:52:03.234816    5403 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.254415  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5414]: E0114 10:52:03.983211    5414 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.254770  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:04 kubernetes-upgrade-104742 kubelet[5425]: E0114 10:52:04.733945    5425 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.255118  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:05 kubernetes-upgrade-104742 kubelet[5436]: E0114 10:52:05.482445    5436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.255465  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5447]: E0114 10:52:06.235307    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.255890  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5459]: E0114 10:52:06.984537    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.256253  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:07 kubernetes-upgrade-104742 kubelet[5469]: E0114 10:52:07.736201    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.256607  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:08 kubernetes-upgrade-104742 kubelet[5480]: E0114 10:52:08.483518    5480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.256960  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5577]: E0114 10:52:09.236012    5577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.257331  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5640]: E0114 10:52:09.986061    5640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.257682  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:10 kubernetes-upgrade-104742 kubelet[5651]: E0114 10:52:10.735719    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.258031  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:11 kubernetes-upgrade-104742 kubelet[5662]: E0114 10:52:11.485154    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.258387  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5672]: E0114 10:52:12.233473    5672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.258734  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5684]: E0114 10:52:12.983242    5684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.259077  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:13 kubernetes-upgrade-104742 kubelet[5695]: E0114 10:52:13.731762    5695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.259454  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:14 kubernetes-upgrade-104742 kubelet[5706]: E0114 10:52:14.482849    5706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.259853  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5718]: E0114 10:52:15.232657    5718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.260213  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5729]: E0114 10:52:15.982605    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.260586  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:16 kubernetes-upgrade-104742 kubelet[5741]: E0114 10:52:16.732973    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.260954  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:17 kubernetes-upgrade-104742 kubelet[5752]: E0114 10:52:17.482284    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.261305  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5763]: E0114 10:52:18.231585    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.261663  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5774]: E0114 10:52:18.984819    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.262035  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:19 kubernetes-upgrade-104742 kubelet[5864]: E0114 10:52:19.738971    5864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.262384  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:20 kubernetes-upgrade-104742 kubelet[5933]: E0114 10:52:20.484974    5933 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.262732  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5944]: E0114 10:52:21.232930    5944 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.263087  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5956]: E0114 10:52:21.983766    5956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.263437  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:22 kubernetes-upgrade-104742 kubelet[5967]: E0114 10:52:22.735108    5967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.263824  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:23 kubernetes-upgrade-104742 kubelet[5979]: E0114 10:52:23.484265    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.264196  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[5990]: E0114 10:52:24.231836    5990 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.264689  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[6001]: E0114 10:52:24.984595    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.265188  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:25 kubernetes-upgrade-104742 kubelet[6012]: E0114 10:52:25.733630    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.265545  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:26 kubernetes-upgrade-104742 kubelet[6025]: E0114 10:52:26.483432    6025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.265923  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6037]: E0114 10:52:27.234698    6037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.266268  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6048]: E0114 10:52:27.982655    6048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.266616  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:28 kubernetes-upgrade-104742 kubelet[6059]: E0114 10:52:28.735468    6059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.266967  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:29 kubernetes-upgrade-104742 kubelet[6070]: E0114 10:52:29.483531    6070 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:30.267238  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:52:30.267253  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:52:30.287416  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:52:30.287449  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:52:30.341257  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
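The "connection refused" on localhost:8443 is consistent with the empty crictl listings above: no kube-apiserver container exists on the node, so every kubectl call against it fails. A minimal sketch for confirming that by hand, assuming shell access to the node (for example via `minikube ssh -p kubernetes-upgrade-104742`):

	# Is anything listening on the apiserver port? Expect no output while the apiserver is down.
	sudo ss -ltnp | grep 8443
	# Probe the health endpoint directly; "connection refused" matches the error above.
	curl -ksS https://localhost:8443/healthz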
	I0114 10:52:30.341290  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:52:30.341302  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:52:30.382498  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:52:30.382530  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:52:30.409464  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:30.409489  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:52:30.409616  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:52:30.409633  189998 out.go:239]   Jan 14 10:52:26 kubernetes-upgrade-104742 kubelet[6025]: E0114 10:52:26.483432    6025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.409642  189998 out.go:239]   Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6037]: E0114 10:52:27.234698    6037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.409652  189998 out.go:239]   Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6048]: E0114 10:52:27.982655    6048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.409659  189998 out.go:239]   Jan 14 10:52:28 kubernetes-upgrade-104742 kubelet[6059]: E0114 10:52:28.735468    6059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:30.409669  189998 out.go:239]   Jan 14 10:52:29 kubernetes-upgrade-104742 kubelet[6070]: E0114 10:52:29.483531    6070 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:30.409680  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:30.409689  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
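The block above is one iteration of minikube's health-check loop: roughly every ten seconds it looks for a kube-apiserver process with pgrep, lists each control-plane container via crictl, and, finding none, dumps kubelet, dmesg, describe-nodes, and containerd logs. The same probes can be reproduced by hand with the exact commands from the log; a sketch, assuming shell access to the node:

	# Look for a running apiserver process (none here).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List apiserver containers, running or exited (empty here).
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Show why kubelet keeps dying; every entry is the same flag-parse error.
	sudo journalctl -u kubelet -n 400 | grep 'command failed'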
	I0114 10:52:40.410968  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:52:40.531992  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:52:40.532066  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:52:40.556260  189998 cri.go:87] found id: ""
	I0114 10:52:40.556287  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.556296  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:52:40.556304  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:52:40.556361  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:52:40.579644  189998 cri.go:87] found id: ""
	I0114 10:52:40.579668  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.579750  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:52:40.579760  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:52:40.579808  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:52:40.603348  189998 cri.go:87] found id: ""
	I0114 10:52:40.603379  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.603388  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:52:40.603396  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:52:40.603457  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:52:40.626787  189998 cri.go:87] found id: ""
	I0114 10:52:40.626819  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.626828  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:52:40.626836  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:52:40.626886  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:52:40.651208  189998 cri.go:87] found id: ""
	I0114 10:52:40.651228  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.651235  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:52:40.651240  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:52:40.651289  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:52:40.675258  189998 cri.go:87] found id: ""
	I0114 10:52:40.675291  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.675300  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:52:40.675308  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:52:40.675357  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:52:40.699105  189998 cri.go:87] found id: ""
	I0114 10:52:40.699131  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.699140  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:52:40.699152  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:52:40.699195  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:52:40.724783  189998 cri.go:87] found id: ""
	I0114 10:52:40.724814  189998 logs.go:274] 0 containers: []
	W0114 10:52:40.724821  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:52:40.724831  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:52:40.724843  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:52:40.743774  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5090]: E0114 10:51:51.246878    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.744224  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:51 kubernetes-upgrade-104742 kubelet[5101]: E0114 10:51:51.988442    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.744608  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:52 kubernetes-upgrade-104742 kubelet[5112]: E0114 10:51:52.736273    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.744981  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:53 kubernetes-upgrade-104742 kubelet[5122]: E0114 10:51:53.485577    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.745355  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5133]: E0114 10:51:54.234420    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.745751  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:54 kubernetes-upgrade-104742 kubelet[5144]: E0114 10:51:54.985086    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.746128  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:55 kubernetes-upgrade-104742 kubelet[5156]: E0114 10:51:55.734907    5156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.746505  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:56 kubernetes-upgrade-104742 kubelet[5168]: E0114 10:51:56.483735    5168 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.746881  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5179]: E0114 10:51:57.230519    5179 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.747256  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:57 kubernetes-upgrade-104742 kubelet[5189]: E0114 10:51:57.982756    5189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.747627  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:58 kubernetes-upgrade-104742 kubelet[5286]: E0114 10:51:58.742239    5286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.748036  189998 logs.go:138] Found kubelet problem: Jan 14 10:51:59 kubernetes-upgrade-104742 kubelet[5348]: E0114 10:51:59.483478    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.748412  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5359]: E0114 10:52:00.233284    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.748807  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:00 kubernetes-upgrade-104742 kubelet[5370]: E0114 10:52:00.984473    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.749192  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:01 kubernetes-upgrade-104742 kubelet[5381]: E0114 10:52:01.734985    5381 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.749566  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:02 kubernetes-upgrade-104742 kubelet[5392]: E0114 10:52:02.483439    5392 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.749950  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5403]: E0114 10:52:03.234816    5403 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.750331  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5414]: E0114 10:52:03.983211    5414 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.750704  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:04 kubernetes-upgrade-104742 kubelet[5425]: E0114 10:52:04.733945    5425 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.751084  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:05 kubernetes-upgrade-104742 kubelet[5436]: E0114 10:52:05.482445    5436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.751456  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5447]: E0114 10:52:06.235307    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.752015  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5459]: E0114 10:52:06.984537    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.752459  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:07 kubernetes-upgrade-104742 kubelet[5469]: E0114 10:52:07.736201    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.752813  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:08 kubernetes-upgrade-104742 kubelet[5480]: E0114 10:52:08.483518    5480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.753160  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5577]: E0114 10:52:09.236012    5577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.753512  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5640]: E0114 10:52:09.986061    5640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.753861  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:10 kubernetes-upgrade-104742 kubelet[5651]: E0114 10:52:10.735719    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.754210  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:11 kubernetes-upgrade-104742 kubelet[5662]: E0114 10:52:11.485154    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.754567  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5672]: E0114 10:52:12.233473    5672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.754913  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5684]: E0114 10:52:12.983242    5684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.755257  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:13 kubernetes-upgrade-104742 kubelet[5695]: E0114 10:52:13.731762    5695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.755614  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:14 kubernetes-upgrade-104742 kubelet[5706]: E0114 10:52:14.482849    5706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.755995  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5718]: E0114 10:52:15.232657    5718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.756358  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5729]: E0114 10:52:15.982605    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.756708  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:16 kubernetes-upgrade-104742 kubelet[5741]: E0114 10:52:16.732973    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.757056  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:17 kubernetes-upgrade-104742 kubelet[5752]: E0114 10:52:17.482284    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.757415  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5763]: E0114 10:52:18.231585    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.757764  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5774]: E0114 10:52:18.984819    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.758138  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:19 kubernetes-upgrade-104742 kubelet[5864]: E0114 10:52:19.738971    5864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.758511  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:20 kubernetes-upgrade-104742 kubelet[5933]: E0114 10:52:20.484974    5933 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.758903  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5944]: E0114 10:52:21.232930    5944 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.759288  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5956]: E0114 10:52:21.983766    5956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.759659  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:22 kubernetes-upgrade-104742 kubelet[5967]: E0114 10:52:22.735108    5967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.760081  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:23 kubernetes-upgrade-104742 kubelet[5979]: E0114 10:52:23.484265    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.760448  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[5990]: E0114 10:52:24.231836    5990 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.760798  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[6001]: E0114 10:52:24.984595    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.761154  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:25 kubernetes-upgrade-104742 kubelet[6012]: E0114 10:52:25.733630    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.761507  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:26 kubernetes-upgrade-104742 kubelet[6025]: E0114 10:52:26.483432    6025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.761855  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6037]: E0114 10:52:27.234698    6037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.762203  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6048]: E0114 10:52:27.982655    6048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.762554  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:28 kubernetes-upgrade-104742 kubelet[6059]: E0114 10:52:28.735468    6059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.762902  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:29 kubernetes-upgrade-104742 kubelet[6070]: E0114 10:52:29.483531    6070 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.763310  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6169]: E0114 10:52:30.238818    6169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.763660  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6232]: E0114 10:52:30.982495    6232 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.764037  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:31 kubernetes-upgrade-104742 kubelet[6243]: E0114 10:52:31.732883    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.764485  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:32 kubernetes-upgrade-104742 kubelet[6254]: E0114 10:52:32.483524    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.764842  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6265]: E0114 10:52:33.233353    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.765187  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6275]: E0114 10:52:33.982444    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.765537  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:34 kubernetes-upgrade-104742 kubelet[6286]: E0114 10:52:34.734121    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.765903  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:35 kubernetes-upgrade-104742 kubelet[6298]: E0114 10:52:35.483214    6298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.766257  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6308]: E0114 10:52:36.232508    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.766606  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6319]: E0114 10:52:36.982677    6319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.766977  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:37 kubernetes-upgrade-104742 kubelet[6330]: E0114 10:52:37.733329    6330 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.767330  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:38 kubernetes-upgrade-104742 kubelet[6341]: E0114 10:52:38.481939    6341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.767715  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6353]: E0114 10:52:39.232999    6353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.768073  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6364]: E0114 10:52:39.982290    6364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:40.768340  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:52:40.768357  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:52:40.786831  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:52:40.786860  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:52:40.844989  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:52:40.845012  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:52:40.845068  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:52:40.884606  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:52:40.884639  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:52:40.911513  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:40.911539  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:52:40.911633  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:52:40.911644  189998 out.go:239]   Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6319]: E0114 10:52:36.982677    6319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.911649  189998 out.go:239]   Jan 14 10:52:37 kubernetes-upgrade-104742 kubelet[6330]: E0114 10:52:37.733329    6330 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.911656  189998 out.go:239]   Jan 14 10:52:38 kubernetes-upgrade-104742 kubelet[6341]: E0114 10:52:38.481939    6341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.911669  189998 out.go:239]   Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6353]: E0114 10:52:39.232999    6353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:40.911724  189998 out.go:239]   Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6364]: E0114 10:52:39.982290    6364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:40.911732  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:40.911743  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
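The repeated kubelet failure points at the root cause: --cni-conf-dir was one of the dockershim-era networking flags that upstream Kubernetes removed from the kubelet in v1.24, so a kubelet launched with a pre-upgrade flag set exits immediately under v1.25.3 ("failed to parse kubelet flag") and systemd restarts it, which is why each journal entry carries a fresh PID. A way to confirm, assuming the v1.25.3 kubelet binary sits next to the kubectl path used above (an assumption; adjust to the real path):

	# Show the unit and drop-ins that supply the kubelet's flags.
	systemctl cat kubelet
	# The removed flag no longer appears in the binary's help text (expect 0).
	/var/lib/minikube/binaries/v1.25.3/kubelet --help 2>&1 | grep -c 'cni-conf-dir'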
	I0114 10:52:50.913518  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:52:51.031612  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:52:51.031739  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:52:51.056072  189998 cri.go:87] found id: ""
	I0114 10:52:51.056098  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.056107  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:52:51.056115  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:52:51.056167  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:52:51.079792  189998 cri.go:87] found id: ""
	I0114 10:52:51.079822  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.079831  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:52:51.079838  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:52:51.079895  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:52:51.102795  189998 cri.go:87] found id: ""
	I0114 10:52:51.102824  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.102833  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:52:51.102841  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:52:51.102882  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:52:51.125915  189998 cri.go:87] found id: ""
	I0114 10:52:51.125946  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.125956  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:52:51.125964  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:52:51.126017  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:52:51.151936  189998 cri.go:87] found id: ""
	I0114 10:52:51.151964  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.151974  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:52:51.151982  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:52:51.152026  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:52:51.175276  189998 cri.go:87] found id: ""
	I0114 10:52:51.175297  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.175303  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:52:51.175309  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:52:51.175353  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:52:51.202032  189998 cri.go:87] found id: ""
	I0114 10:52:51.202061  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.202070  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:52:51.202079  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:52:51.202133  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:52:51.232368  189998 cri.go:87] found id: ""
	I0114 10:52:51.232402  189998 logs.go:274] 0 containers: []
	W0114 10:52:51.232411  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:52:51.232423  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:52:51.232442  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:52:51.249838  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:01 kubernetes-upgrade-104742 kubelet[5381]: E0114 10:52:01.734985    5381 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.250212  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:02 kubernetes-upgrade-104742 kubelet[5392]: E0114 10:52:02.483439    5392 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.250583  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5403]: E0114 10:52:03.234816    5403 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.251081  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:03 kubernetes-upgrade-104742 kubelet[5414]: E0114 10:52:03.983211    5414 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.251502  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:04 kubernetes-upgrade-104742 kubelet[5425]: E0114 10:52:04.733945    5425 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.252047  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:05 kubernetes-upgrade-104742 kubelet[5436]: E0114 10:52:05.482445    5436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.252504  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5447]: E0114 10:52:06.235307    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.252852  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:06 kubernetes-upgrade-104742 kubelet[5459]: E0114 10:52:06.984537    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.253203  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:07 kubernetes-upgrade-104742 kubelet[5469]: E0114 10:52:07.736201    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.253595  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:08 kubernetes-upgrade-104742 kubelet[5480]: E0114 10:52:08.483518    5480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.253950  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5577]: E0114 10:52:09.236012    5577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.254300  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:09 kubernetes-upgrade-104742 kubelet[5640]: E0114 10:52:09.986061    5640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.254655  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:10 kubernetes-upgrade-104742 kubelet[5651]: E0114 10:52:10.735719    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.254999  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:11 kubernetes-upgrade-104742 kubelet[5662]: E0114 10:52:11.485154    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.255356  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5672]: E0114 10:52:12.233473    5672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.255730  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5684]: E0114 10:52:12.983242    5684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.256089  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:13 kubernetes-upgrade-104742 kubelet[5695]: E0114 10:52:13.731762    5695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.256440  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:14 kubernetes-upgrade-104742 kubelet[5706]: E0114 10:52:14.482849    5706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.256800  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5718]: E0114 10:52:15.232657    5718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.257146  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5729]: E0114 10:52:15.982605    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.257515  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:16 kubernetes-upgrade-104742 kubelet[5741]: E0114 10:52:16.732973    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.257880  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:17 kubernetes-upgrade-104742 kubelet[5752]: E0114 10:52:17.482284    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.258230  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5763]: E0114 10:52:18.231585    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.258601  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5774]: E0114 10:52:18.984819    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.258969  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:19 kubernetes-upgrade-104742 kubelet[5864]: E0114 10:52:19.738971    5864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.259348  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:20 kubernetes-upgrade-104742 kubelet[5933]: E0114 10:52:20.484974    5933 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.259749  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5944]: E0114 10:52:21.232930    5944 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.260166  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5956]: E0114 10:52:21.983766    5956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.260703  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:22 kubernetes-upgrade-104742 kubelet[5967]: E0114 10:52:22.735108    5967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.261080  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:23 kubernetes-upgrade-104742 kubelet[5979]: E0114 10:52:23.484265    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.261465  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[5990]: E0114 10:52:24.231836    5990 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.261846  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[6001]: E0114 10:52:24.984595    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.262256  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:25 kubernetes-upgrade-104742 kubelet[6012]: E0114 10:52:25.733630    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.262611  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:26 kubernetes-upgrade-104742 kubelet[6025]: E0114 10:52:26.483432    6025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.263040  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6037]: E0114 10:52:27.234698    6037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.263794  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6048]: E0114 10:52:27.982655    6048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.264372  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:28 kubernetes-upgrade-104742 kubelet[6059]: E0114 10:52:28.735468    6059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.264759  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:29 kubernetes-upgrade-104742 kubelet[6070]: E0114 10:52:29.483531    6070 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.265107  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6169]: E0114 10:52:30.238818    6169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.265461  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6232]: E0114 10:52:30.982495    6232 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.265825  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:31 kubernetes-upgrade-104742 kubelet[6243]: E0114 10:52:31.732883    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.266179  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:32 kubernetes-upgrade-104742 kubelet[6254]: E0114 10:52:32.483524    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.266525  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6265]: E0114 10:52:33.233353    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.266895  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6275]: E0114 10:52:33.982444    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.267253  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:34 kubernetes-upgrade-104742 kubelet[6286]: E0114 10:52:34.734121    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.267608  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:35 kubernetes-upgrade-104742 kubelet[6298]: E0114 10:52:35.483214    6298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.268016  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6308]: E0114 10:52:36.232508    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.268370  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6319]: E0114 10:52:36.982677    6319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.268721  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:37 kubernetes-upgrade-104742 kubelet[6330]: E0114 10:52:37.733329    6330 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.269071  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:38 kubernetes-upgrade-104742 kubelet[6341]: E0114 10:52:38.481939    6341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.269422  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6353]: E0114 10:52:39.232999    6353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.269772  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6364]: E0114 10:52:39.982290    6364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.270121  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:40 kubernetes-upgrade-104742 kubelet[6459]: E0114 10:52:40.740796    6459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.270489  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:41 kubernetes-upgrade-104742 kubelet[6522]: E0114 10:52:41.483805    6522 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.270851  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:42 kubernetes-upgrade-104742 kubelet[6534]: E0114 10:52:42.233948    6534 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.271268  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:42 kubernetes-upgrade-104742 kubelet[6546]: E0114 10:52:42.982420    6546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.271646  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:43 kubernetes-upgrade-104742 kubelet[6556]: E0114 10:52:43.733330    6556 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.272079  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:44 kubernetes-upgrade-104742 kubelet[6567]: E0114 10:52:44.483992    6567 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.272446  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:45 kubernetes-upgrade-104742 kubelet[6578]: E0114 10:52:45.233299    6578 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.272814  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:45 kubernetes-upgrade-104742 kubelet[6589]: E0114 10:52:45.988526    6589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.273176  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:46 kubernetes-upgrade-104742 kubelet[6600]: E0114 10:52:46.732424    6600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.273551  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:47 kubernetes-upgrade-104742 kubelet[6610]: E0114 10:52:47.483911    6610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.273904  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6621]: E0114 10:52:48.232320    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.274265  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6632]: E0114 10:52:48.983945    6632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.274632  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:49 kubernetes-upgrade-104742 kubelet[6643]: E0114 10:52:49.738644    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.274997  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:50 kubernetes-upgrade-104742 kubelet[6654]: E0114 10:52:50.484618    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.275350  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:51 kubernetes-upgrade-104742 kubelet[6749]: E0114 10:52:51.236733    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:51.275470  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:52:51.275486  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:52:51.294441  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:52:51.294466  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:52:51.350226  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:52:51.350249  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:52:51.350262  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:52:51.387970  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:52:51.388002  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:52:51.414226  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:51.414251  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:52:51.414347  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:52:51.414361  189998 out.go:239]   Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6621]: E0114 10:52:48.232320    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.414366  189998 out.go:239]   Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6632]: E0114 10:52:48.983945    6632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.414373  189998 out.go:239]   Jan 14 10:52:49 kubernetes-upgrade-104742 kubelet[6643]: E0114 10:52:49.738644    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.414382  189998 out.go:239]   Jan 14 10:52:50 kubernetes-upgrade-104742 kubelet[6654]: E0114 10:52:50.484618    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:52:51.414389  189998 out.go:239]   Jan 14 10:52:51 kubernetes-upgrade-104742 kubelet[6749]: E0114 10:52:51.236733    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:52:51.414396  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:52:51.414415  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
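	The repeating kubelet line above is the actual root cause: --cni-conf-dir was one of the dockershim-era networking flags removed from the kubelet in Kubernetes 1.24, so the v1.25.3 kubelet exits during flag parsing and systemd keeps respawning it (a new PID roughly every 750ms in the journal). A hedged manual fix, assuming the stale flag lives in the standard kubeadm flags file rather than a systemd drop-in:

	    # Sketch only: locate the stale flag, strip it, restart the kubelet.
	    # Paths are the kubeadm defaults; adjust if a drop-in carries the flag.
	    grep -rn -- '--cni-conf-dir' /var/lib/kubelet/kubeadm-flags.env /etc/systemd/system/kubelet.service.d/ 2>/dev/null
	    sudo sed -i 's/ *--cni-conf-dir=[^" ]*//' /var/lib/kubelet/kubeadm-flags.env
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet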
	I0114 10:53:01.416116  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:53:01.531919  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:53:01.531987  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:53:01.555582  189998 cri.go:87] found id: ""
	I0114 10:53:01.555605  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.555611  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:53:01.555617  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:53:01.555666  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:53:01.578675  189998 cri.go:87] found id: ""
	I0114 10:53:01.578704  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.578712  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:53:01.578720  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:53:01.578775  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:53:01.602081  189998 cri.go:87] found id: ""
	I0114 10:53:01.602107  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.602121  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:53:01.602129  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:53:01.602185  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:53:01.625185  189998 cri.go:87] found id: ""
	I0114 10:53:01.625212  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.625220  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:53:01.625226  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:53:01.625280  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:53:01.648734  189998 cri.go:87] found id: ""
	I0114 10:53:01.648761  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.648770  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:53:01.648793  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:53:01.648851  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:53:01.671953  189998 cri.go:87] found id: ""
	I0114 10:53:01.671974  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.671981  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:53:01.671988  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:53:01.672041  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:53:01.695933  189998 cri.go:87] found id: ""
	I0114 10:53:01.695956  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.695965  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:53:01.695978  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:53:01.696030  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:53:01.723490  189998 cri.go:87] found id: ""
	I0114 10:53:01.723517  189998 logs.go:274] 0 containers: []
	W0114 10:53:01.723534  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:53:01.723546  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:53:01.723561  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:53:01.742230  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:53:01.742262  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:53:01.797491  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:53:01.797510  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:53:01.797526  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:53:01.834664  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:53:01.834717  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:53:01.864647  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:53:01.864681  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:53:01.881792  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5672]: E0114 10:52:12.233473    5672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.882163  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:12 kubernetes-upgrade-104742 kubelet[5684]: E0114 10:52:12.983242    5684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.882516  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:13 kubernetes-upgrade-104742 kubelet[5695]: E0114 10:52:13.731762    5695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.882887  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:14 kubernetes-upgrade-104742 kubelet[5706]: E0114 10:52:14.482849    5706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.883247  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5718]: E0114 10:52:15.232657    5718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.883595  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:15 kubernetes-upgrade-104742 kubelet[5729]: E0114 10:52:15.982605    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.883995  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:16 kubernetes-upgrade-104742 kubelet[5741]: E0114 10:52:16.732973    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.884349  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:17 kubernetes-upgrade-104742 kubelet[5752]: E0114 10:52:17.482284    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.884711  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5763]: E0114 10:52:18.231585    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.885064  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:18 kubernetes-upgrade-104742 kubelet[5774]: E0114 10:52:18.984819    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.885413  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:19 kubernetes-upgrade-104742 kubelet[5864]: E0114 10:52:19.738971    5864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.885903  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:20 kubernetes-upgrade-104742 kubelet[5933]: E0114 10:52:20.484974    5933 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.886402  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5944]: E0114 10:52:21.232930    5944 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.886821  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:21 kubernetes-upgrade-104742 kubelet[5956]: E0114 10:52:21.983766    5956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.887261  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:22 kubernetes-upgrade-104742 kubelet[5967]: E0114 10:52:22.735108    5967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.887640  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:23 kubernetes-upgrade-104742 kubelet[5979]: E0114 10:52:23.484265    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.888047  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[5990]: E0114 10:52:24.231836    5990 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.888419  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[6001]: E0114 10:52:24.984595    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.888774  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:25 kubernetes-upgrade-104742 kubelet[6012]: E0114 10:52:25.733630    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.889120  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:26 kubernetes-upgrade-104742 kubelet[6025]: E0114 10:52:26.483432    6025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.889471  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6037]: E0114 10:52:27.234698    6037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.889820  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6048]: E0114 10:52:27.982655    6048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.890179  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:28 kubernetes-upgrade-104742 kubelet[6059]: E0114 10:52:28.735468    6059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.890543  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:29 kubernetes-upgrade-104742 kubelet[6070]: E0114 10:52:29.483531    6070 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.890890  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6169]: E0114 10:52:30.238818    6169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.891262  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6232]: E0114 10:52:30.982495    6232 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.891637  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:31 kubernetes-upgrade-104742 kubelet[6243]: E0114 10:52:31.732883    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.892031  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:32 kubernetes-upgrade-104742 kubelet[6254]: E0114 10:52:32.483524    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.892385  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6265]: E0114 10:52:33.233353    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.892759  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6275]: E0114 10:52:33.982444    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.893116  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:34 kubernetes-upgrade-104742 kubelet[6286]: E0114 10:52:34.734121    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.893498  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:35 kubernetes-upgrade-104742 kubelet[6298]: E0114 10:52:35.483214    6298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.893854  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6308]: E0114 10:52:36.232508    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.894206  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6319]: E0114 10:52:36.982677    6319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.894581  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:37 kubernetes-upgrade-104742 kubelet[6330]: E0114 10:52:37.733329    6330 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.894932  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:38 kubernetes-upgrade-104742 kubelet[6341]: E0114 10:52:38.481939    6341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.895307  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6353]: E0114 10:52:39.232999    6353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.895656  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6364]: E0114 10:52:39.982290    6364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.896058  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:40 kubernetes-upgrade-104742 kubelet[6459]: E0114 10:52:40.740796    6459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.896430  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:41 kubernetes-upgrade-104742 kubelet[6522]: E0114 10:52:41.483805    6522 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.896791  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:42 kubernetes-upgrade-104742 kubelet[6534]: E0114 10:52:42.233948    6534 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.897155  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:42 kubernetes-upgrade-104742 kubelet[6546]: E0114 10:52:42.982420    6546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.897517  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:43 kubernetes-upgrade-104742 kubelet[6556]: E0114 10:52:43.733330    6556 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.897895  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:44 kubernetes-upgrade-104742 kubelet[6567]: E0114 10:52:44.483992    6567 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.898248  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:45 kubernetes-upgrade-104742 kubelet[6578]: E0114 10:52:45.233299    6578 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.898618  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:45 kubernetes-upgrade-104742 kubelet[6589]: E0114 10:52:45.988526    6589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.898969  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:46 kubernetes-upgrade-104742 kubelet[6600]: E0114 10:52:46.732424    6600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.899324  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:47 kubernetes-upgrade-104742 kubelet[6610]: E0114 10:52:47.483911    6610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.899720  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6621]: E0114 10:52:48.232320    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.900069  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6632]: E0114 10:52:48.983945    6632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.900425  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:49 kubernetes-upgrade-104742 kubelet[6643]: E0114 10:52:49.738644    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.900770  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:50 kubernetes-upgrade-104742 kubelet[6654]: E0114 10:52:50.484618    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.901115  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:51 kubernetes-upgrade-104742 kubelet[6749]: E0114 10:52:51.236733    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.901463  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:51 kubernetes-upgrade-104742 kubelet[6809]: E0114 10:52:51.983843    6809 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.901812  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:52 kubernetes-upgrade-104742 kubelet[6820]: E0114 10:52:52.733853    6820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.902161  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:53 kubernetes-upgrade-104742 kubelet[6831]: E0114 10:52:53.484557    6831 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.902514  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:54 kubernetes-upgrade-104742 kubelet[6842]: E0114 10:52:54.234526    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.902863  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:54 kubernetes-upgrade-104742 kubelet[6853]: E0114 10:52:54.983655    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.903210  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:55 kubernetes-upgrade-104742 kubelet[6864]: E0114 10:52:55.733858    6864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.903567  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:56 kubernetes-upgrade-104742 kubelet[6875]: E0114 10:52:56.484564    6875 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.903954  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:57 kubernetes-upgrade-104742 kubelet[6886]: E0114 10:52:57.232756    6886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.904308  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:57 kubernetes-upgrade-104742 kubelet[6897]: E0114 10:52:57.985242    6897 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.904658  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:58 kubernetes-upgrade-104742 kubelet[6908]: E0114 10:52:58.735578    6908 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.905009  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:59 kubernetes-upgrade-104742 kubelet[6919]: E0114 10:52:59.484550    6919 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.905367  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:00 kubernetes-upgrade-104742 kubelet[6931]: E0114 10:53:00.233025    6931 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.905732  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:00 kubernetes-upgrade-104742 kubelet[6943]: E0114 10:53:00.983153    6943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.906078  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:01 kubernetes-upgrade-104742 kubelet[7040]: E0114 10:53:01.741193    7040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:53:01.906196  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:53:01.906208  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:53:01.906322  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:53:01.906336  189998 out.go:239]   Jan 14 10:52:58 kubernetes-upgrade-104742 kubelet[6908]: E0114 10:52:58.735578    6908 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.906346  189998 out.go:239]   Jan 14 10:52:59 kubernetes-upgrade-104742 kubelet[6919]: E0114 10:52:59.484550    6919 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.906355  189998 out.go:239]   Jan 14 10:53:00 kubernetes-upgrade-104742 kubelet[6931]: E0114 10:53:00.233025    6931 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.906380  189998 out.go:239]   Jan 14 10:53:00 kubernetes-upgrade-104742 kubelet[6943]: E0114 10:53:00.983153    6943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:01.906394  189998 out.go:239]   Jan 14 10:53:01 kubernetes-upgrade-104742 kubelet[7040]: E0114 10:53:01.741193    7040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:53:01.906399  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:53:01.906408  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
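	Each probe cycle asks containerd, through crictl, whether any control-plane container exists in the k8s.io runc root; found id: "" is simply what an empty --quiet listing looks like. The same check by hand, using only flags the log itself uses:

	    # --quiet prints container IDs only, so empty output means the component
	    # was never created, not merely stopped (-a already includes exited ones).
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl ps -a --name=kube-apiserver    # without --quiet, shows the state column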
	I0114 10:53:11.907015  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:53:12.032110  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:53:12.032178  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:53:12.056228  189998 cri.go:87] found id: ""
	I0114 10:53:12.056252  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.056259  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:53:12.056281  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:53:12.056344  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:53:12.079312  189998 cri.go:87] found id: ""
	I0114 10:53:12.079341  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.079349  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:53:12.079355  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:53:12.079407  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:53:12.102832  189998 cri.go:87] found id: ""
	I0114 10:53:12.102856  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.102863  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:53:12.102871  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:53:12.102914  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:53:12.125967  189998 cri.go:87] found id: ""
	I0114 10:53:12.125991  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.125998  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:53:12.126004  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:53:12.126054  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:53:12.149337  189998 cri.go:87] found id: ""
	I0114 10:53:12.149358  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.149365  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:53:12.149371  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:53:12.149413  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:53:12.173302  189998 cri.go:87] found id: ""
	I0114 10:53:12.173329  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.173336  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:53:12.173343  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:53:12.173388  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:53:12.199283  189998 cri.go:87] found id: ""
	I0114 10:53:12.199308  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.199317  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:53:12.199325  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:53:12.199386  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:53:12.224625  189998 cri.go:87] found id: ""
	I0114 10:53:12.224648  189998 logs.go:274] 0 containers: []
	W0114 10:53:12.224657  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:53:12.224669  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:53:12.224684  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:53:12.281740  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:53:12.281766  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:53:12.281778  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:53:12.319624  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:53:12.319658  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 10:53:12.346459  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:53:12.346493  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:53:12.365142  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:22 kubernetes-upgrade-104742 kubelet[5967]: E0114 10:52:22.735108    5967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.365625  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:23 kubernetes-upgrade-104742 kubelet[5979]: E0114 10:52:23.484265    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.366160  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[5990]: E0114 10:52:24.231836    5990 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.366524  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:24 kubernetes-upgrade-104742 kubelet[6001]: E0114 10:52:24.984595    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.366886  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:25 kubernetes-upgrade-104742 kubelet[6012]: E0114 10:52:25.733630    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.367273  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:26 kubernetes-upgrade-104742 kubelet[6025]: E0114 10:52:26.483432    6025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.367706  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6037]: E0114 10:52:27.234698    6037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.368255  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:27 kubernetes-upgrade-104742 kubelet[6048]: E0114 10:52:27.982655    6048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.368613  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:28 kubernetes-upgrade-104742 kubelet[6059]: E0114 10:52:28.735468    6059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.369001  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:29 kubernetes-upgrade-104742 kubelet[6070]: E0114 10:52:29.483531    6070 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.369353  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6169]: E0114 10:52:30.238818    6169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.369705  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:30 kubernetes-upgrade-104742 kubelet[6232]: E0114 10:52:30.982495    6232 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.370054  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:31 kubernetes-upgrade-104742 kubelet[6243]: E0114 10:52:31.732883    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.370406  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:32 kubernetes-upgrade-104742 kubelet[6254]: E0114 10:52:32.483524    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.370768  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6265]: E0114 10:52:33.233353    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.371112  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:33 kubernetes-upgrade-104742 kubelet[6275]: E0114 10:52:33.982444    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.371464  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:34 kubernetes-upgrade-104742 kubelet[6286]: E0114 10:52:34.734121    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.371852  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:35 kubernetes-upgrade-104742 kubelet[6298]: E0114 10:52:35.483214    6298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.372202  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6308]: E0114 10:52:36.232508    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.372562  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:36 kubernetes-upgrade-104742 kubelet[6319]: E0114 10:52:36.982677    6319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.372917  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:37 kubernetes-upgrade-104742 kubelet[6330]: E0114 10:52:37.733329    6330 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.373262  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:38 kubernetes-upgrade-104742 kubelet[6341]: E0114 10:52:38.481939    6341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.373610  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6353]: E0114 10:52:39.232999    6353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.373981  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:39 kubernetes-upgrade-104742 kubelet[6364]: E0114 10:52:39.982290    6364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.374346  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:40 kubernetes-upgrade-104742 kubelet[6459]: E0114 10:52:40.740796    6459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.374703  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:41 kubernetes-upgrade-104742 kubelet[6522]: E0114 10:52:41.483805    6522 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.375060  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:42 kubernetes-upgrade-104742 kubelet[6534]: E0114 10:52:42.233948    6534 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.375436  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:42 kubernetes-upgrade-104742 kubelet[6546]: E0114 10:52:42.982420    6546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.375824  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:43 kubernetes-upgrade-104742 kubelet[6556]: E0114 10:52:43.733330    6556 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.376209  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:44 kubernetes-upgrade-104742 kubelet[6567]: E0114 10:52:44.483992    6567 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.376630  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:45 kubernetes-upgrade-104742 kubelet[6578]: E0114 10:52:45.233299    6578 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.377048  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:45 kubernetes-upgrade-104742 kubelet[6589]: E0114 10:52:45.988526    6589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.377588  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:46 kubernetes-upgrade-104742 kubelet[6600]: E0114 10:52:46.732424    6600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.378097  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:47 kubernetes-upgrade-104742 kubelet[6610]: E0114 10:52:47.483911    6610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.378630  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6621]: E0114 10:52:48.232320    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.379077  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:48 kubernetes-upgrade-104742 kubelet[6632]: E0114 10:52:48.983945    6632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.379462  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:49 kubernetes-upgrade-104742 kubelet[6643]: E0114 10:52:49.738644    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.379874  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:50 kubernetes-upgrade-104742 kubelet[6654]: E0114 10:52:50.484618    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.380426  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:51 kubernetes-upgrade-104742 kubelet[6749]: E0114 10:52:51.236733    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.380849  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:51 kubernetes-upgrade-104742 kubelet[6809]: E0114 10:52:51.983843    6809 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.381222  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:52 kubernetes-upgrade-104742 kubelet[6820]: E0114 10:52:52.733853    6820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.381597  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:53 kubernetes-upgrade-104742 kubelet[6831]: E0114 10:52:53.484557    6831 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.381971  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:54 kubernetes-upgrade-104742 kubelet[6842]: E0114 10:52:54.234526    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.382346  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:54 kubernetes-upgrade-104742 kubelet[6853]: E0114 10:52:54.983655    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.382725  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:55 kubernetes-upgrade-104742 kubelet[6864]: E0114 10:52:55.733858    6864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.383099  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:56 kubernetes-upgrade-104742 kubelet[6875]: E0114 10:52:56.484564    6875 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.383477  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:57 kubernetes-upgrade-104742 kubelet[6886]: E0114 10:52:57.232756    6886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.383911  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:57 kubernetes-upgrade-104742 kubelet[6897]: E0114 10:52:57.985242    6897 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.384326  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:58 kubernetes-upgrade-104742 kubelet[6908]: E0114 10:52:58.735578    6908 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.384703  189998 logs.go:138] Found kubelet problem: Jan 14 10:52:59 kubernetes-upgrade-104742 kubelet[6919]: E0114 10:52:59.484550    6919 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.385074  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:00 kubernetes-upgrade-104742 kubelet[6931]: E0114 10:53:00.233025    6931 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.385457  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:00 kubernetes-upgrade-104742 kubelet[6943]: E0114 10:53:00.983153    6943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.385843  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:01 kubernetes-upgrade-104742 kubelet[7040]: E0114 10:53:01.741193    7040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.386218  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:02 kubernetes-upgrade-104742 kubelet[7104]: E0114 10:53:02.483971    7104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.386595  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:03 kubernetes-upgrade-104742 kubelet[7115]: E0114 10:53:03.233205    7115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.386967  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:03 kubernetes-upgrade-104742 kubelet[7126]: E0114 10:53:03.986358    7126 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.387343  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:04 kubernetes-upgrade-104742 kubelet[7137]: E0114 10:53:04.736118    7137 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.387735  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:05 kubernetes-upgrade-104742 kubelet[7148]: E0114 10:53:05.483024    7148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.388119  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:06 kubernetes-upgrade-104742 kubelet[7159]: E0114 10:53:06.233156    7159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.388497  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:06 kubernetes-upgrade-104742 kubelet[7169]: E0114 10:53:06.982417    7169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.388874  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:07 kubernetes-upgrade-104742 kubelet[7180]: E0114 10:53:07.734999    7180 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.389268  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:08 kubernetes-upgrade-104742 kubelet[7191]: E0114 10:53:08.481869    7191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.389651  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:09 kubernetes-upgrade-104742 kubelet[7203]: E0114 10:53:09.236284    7203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.390092  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:09 kubernetes-upgrade-104742 kubelet[7215]: E0114 10:53:09.983793    7215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.390447  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:10 kubernetes-upgrade-104742 kubelet[7226]: E0114 10:53:10.737005    7226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.390815  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:11 kubernetes-upgrade-104742 kubelet[7237]: E0114 10:53:11.484092    7237 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.391168  189998 logs.go:138] Found kubelet problem: Jan 14 10:53:12 kubernetes-upgrade-104742 kubelet[7335]: E0114 10:53:12.238342    7335 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:53:12.391285  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:53:12.391300  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:53:12.408538  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:53:12.408565  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:53:12.408670  189998 out.go:239] X Problems detected in kubelet:
	W0114 10:53:12.408681  189998 out.go:239]   Jan 14 10:53:09 kubernetes-upgrade-104742 kubelet[7203]: E0114 10:53:09.236284    7203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.408687  189998 out.go:239]   Jan 14 10:53:09 kubernetes-upgrade-104742 kubelet[7215]: E0114 10:53:09.983793    7215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.408694  189998 out.go:239]   Jan 14 10:53:10 kubernetes-upgrade-104742 kubelet[7226]: E0114 10:53:10.737005    7226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.408700  189998 out.go:239]   Jan 14 10:53:11 kubernetes-upgrade-104742 kubelet[7237]: E0114 10:53:11.484092    7237 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:53:12.408708  189998 out.go:239]   Jan 14 10:53:12 kubernetes-upgrade-104742 kubelet[7335]: E0114 10:53:12.238342    7335 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:53:12.408714  189998 out.go:309] Setting ErrFile to fd 2...
	I0114 10:53:12.408720  189998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:53:22.409614  189998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:53:22.418156  189998 kubeadm.go:631] restartCluster took 4m10.454107776s
	W0114 10:53:22.418302  189998 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
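	restartCluster spent its 4m10s budget re-running one probe about every ten seconds: pgrep -xnf kube-apiserver.*minikube.*, an exact full-command-line regex match for the newest apiserver process. Runnable as-is on the node:

	    # Exit status 0 (a PID printed) is what the restart loop was waiting for;
	    # here it never matched, which is what triggers the reset below.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'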
	I0114 10:53:22.418333  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0114 10:53:24.219041  189998 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.8006842s)
	I0114 10:53:24.219103  189998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:53:24.228853  189998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:53:24.236149  189998 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:53:24.236212  189998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:53:24.243237  189998 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
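	The status-2 ls is expected rather than a new failure: kubeadm reset has just removed the kubeconfigs along with the static-pod manifests, so the stale-config cleanup is skipped and the kubeadm init below must regenerate all four files. The certificates survive because minikube keeps them under its own directory, which is why the [certs] phase further down reuses every existing cert. A quick look at what reset left behind (sketch, standard paths):

	    # /etc/kubernetes should now hold little more than an empty manifests dir,
	    # while the minikube-managed cert dir is untouched.
	    sudo ls -la /etc/kubernetes /etc/kubernetes/manifests /var/lib/minikube/certs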
	I0114 10:53:24.243287  189998 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:53:24.281635  189998 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 10:53:24.281726  189998 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:53:24.310000  189998 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:53:24.310111  189998 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:53:24.310153  189998 kubeadm.go:317] OS: Linux
	I0114 10:53:24.310192  189998 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:53:24.310265  189998 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:53:24.310329  189998 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:53:24.310421  189998 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:53:24.310482  189998 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:53:24.310524  189998 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:53:24.310602  189998 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:53:24.310685  189998 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:53:24.310761  189998 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:53:24.375374  189998 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 10:53:24.375499  189998 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 10:53:24.375611  189998 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0114 10:53:24.494276  189998 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:53:24.497474  189998 out.go:204]   - Generating certificates and keys ...
	I0114 10:53:24.497587  189998 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 10:53:24.497668  189998 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 10:53:24.497764  189998 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 10:53:24.497823  189998 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 10:53:24.497886  189998 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 10:53:24.497946  189998 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 10:53:24.498015  189998 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 10:53:24.498085  189998 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 10:53:24.498174  189998 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 10:53:24.498285  189998 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 10:53:24.498355  189998 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 10:53:24.498444  189998 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:53:24.801453  189998 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 10:53:24.874662  189998 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 10:53:25.046100  189998 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:53:25.100773  189998 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:53:25.112056  189998 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:53:25.112711  189998 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:53:25.112780  189998 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 10:53:25.191190  189998 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:53:25.194537  189998 out.go:204]   - Booting up control plane ...
	I0114 10:53:25.194689  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:53:25.194804  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:53:25.195080  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:53:25.195792  189998 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:53:25.197399  189998 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 10:54:05.197818  189998 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 10:54:05.198076  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:54:05.198353  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:54:10.199325  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:54:10.199587  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:54:20.200192  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:54:20.200451  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:54:40.201621  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:54:40.201868  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:55:20.202567  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:55:20.202849  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:55:20.202873  189998 kubeadm.go:317] 
	I0114 10:55:20.202951  189998 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 10:55:20.203030  189998 kubeadm.go:317] 	timed out waiting for the condition
	I0114 10:55:20.203049  189998 kubeadm.go:317] 
	I0114 10:55:20.203106  189998 kubeadm.go:317] This error is likely caused by:
	I0114 10:55:20.203167  189998 kubeadm.go:317] 	- The kubelet is not running
	I0114 10:55:20.203314  189998 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 10:55:20.203326  189998 kubeadm.go:317] 
	I0114 10:55:20.203455  189998 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 10:55:20.203502  189998 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 10:55:20.203544  189998 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 10:55:20.203555  189998 kubeadm.go:317] 
	I0114 10:55:20.203716  189998 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 10:55:20.203834  189998 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0114 10:55:20.203905  189998 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0114 10:55:20.204016  189998 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0114 10:55:20.204127  189998 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 10:55:20.204268  189998 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0114 10:55:20.205525  189998 kubeadm.go:317] W0114 10:53:24.276152    8588 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:55:20.205784  189998 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:55:20.205892  189998 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:55:20.205994  189998 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 10:55:20.206104  189998 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
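The five failed probes above are kubeadm's own kubelet health check against port 10248; "connection refused" means the kubelet process never bound the port at all. A minimal sketch of reproducing the probe by hand, assuming shell access to the node (for the docker driver, e.g. 'minikube ssh -p <profile>' or 'docker exec -it <profile> bash'):

	# the exact probe kubeadm keeps retrying
	curl -sSL http://localhost:10248/healthz
	# ask systemd whether the unit is alive, and why the last attempt died
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet -n 50 --no-pager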
	W0114 10:55:20.206360  189998 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:53:24.276152    8588 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
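The failure text ends with a crictl recipe; as a runnable sketch, using the containerd socket path quoted in the log:

	# list every Kubernetes container the runtime knows about, including exited ones
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a \
	  | grep kube | grep -v pause
	# then inspect the logs of whichever container is failing
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID

In this run the listing comes back empty (see the crictl sweep further down): the kubelet never started, so no static pod container was ever created.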
	
	I0114 10:55:20.206415  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0114 10:55:22.051702  189998 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.845256741s)
	I0114 10:55:22.051768  189998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:55:22.061256  189998 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:55:22.061304  189998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:55:22.068276  189998 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
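minikube treats the exit status 2 from this 'ls' as confirmation that the preceding 'kubeadm reset' already removed the old kubeconfig files, so there is no stale config to clean up before retrying init. The same probe as a one-liner, assuming the standard /etc/kubernetes layout shown in the log:

	# a non-zero exit (files missing) means the node is already clean
	sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf \
	  && echo 'stale kubeconfigs present' || echo 'already cleaned'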
	I0114 10:55:22.068325  189998 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:55:22.100836  189998 kubeadm.go:317] W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:55:22.132315  189998 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:55:22.191709  189998 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:57:17.984863  189998 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 10:57:17.985003  189998 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 10:57:17.987514  189998 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 10:57:17.987633  189998 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:57:17.987808  189998 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:57:17.987916  189998 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:57:17.987971  189998 kubeadm.go:317] OS: Linux
	I0114 10:57:17.988035  189998 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:57:17.988109  189998 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:57:17.988196  189998 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:57:17.988277  189998 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:57:17.988346  189998 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:57:17.988410  189998 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:57:17.988470  189998 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:57:17.988533  189998 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:57:17.988623  189998 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:57:17.988722  189998 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 10:57:17.988849  189998 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 10:57:17.988972  189998 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 10:57:17.989063  189998 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:57:17.990409  189998 out.go:204]   - Generating certificates and keys ...
	I0114 10:57:17.990497  189998 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 10:57:17.990593  189998 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 10:57:17.990696  189998 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 10:57:17.990786  189998 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 10:57:17.990894  189998 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 10:57:17.990967  189998 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 10:57:17.991056  189998 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 10:57:17.991142  189998 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 10:57:17.991256  189998 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 10:57:17.991355  189998 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 10:57:17.991414  189998 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 10:57:17.991478  189998 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:57:17.991521  189998 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 10:57:17.991560  189998 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 10:57:17.991612  189998 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:57:17.991654  189998 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:57:17.991820  189998 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:57:17.991933  189998 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:57:17.991986  189998 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 10:57:17.992063  189998 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:57:17.993800  189998 out.go:204]   - Booting up control plane ...
	I0114 10:57:17.993901  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:57:17.993986  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:57:17.994069  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:57:17.994175  189998 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:57:17.994339  189998 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 10:57:17.994382  189998 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 10:57:17.994456  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.994616  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.994695  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.994913  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995013  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.995168  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995250  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.995435  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995525  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.995858  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995875  189998 kubeadm.go:317] 
	I0114 10:57:17.995932  189998 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 10:57:17.995992  189998 kubeadm.go:317] 	timed out waiting for the condition
	I0114 10:57:17.995998  189998 kubeadm.go:317] 
	I0114 10:57:17.996035  189998 kubeadm.go:317] This error is likely caused by:
	I0114 10:57:17.996097  189998 kubeadm.go:317] 	- The kubelet is not running
	I0114 10:57:17.996256  189998 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 10:57:17.996280  189998 kubeadm.go:317] 
	I0114 10:57:17.996416  189998 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 10:57:17.996462  189998 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 10:57:17.996502  189998 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 10:57:17.996517  189998 kubeadm.go:317] 
	I0114 10:57:17.996651  189998 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 10:57:17.996744  189998 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0114 10:57:17.996865  189998 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0114 10:57:17.996975  189998 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0114 10:57:17.997057  189998 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 10:57:17.997184  189998 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0114 10:57:17.997221  189998 kubeadm.go:398] StartCluster complete in 8m6.064712039s
	I0114 10:57:17.997262  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:57:17.997320  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:57:18.023728  189998 cri.go:87] found id: ""
	I0114 10:57:18.023751  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.023758  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:57:18.023764  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:57:18.023819  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:57:18.049043  189998 cri.go:87] found id: ""
	I0114 10:57:18.049067  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.049085  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:57:18.049092  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:57:18.049153  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:57:18.076100  189998 cri.go:87] found id: ""
	I0114 10:57:18.076133  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.076140  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:57:18.076154  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:57:18.076224  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:57:18.100358  189998 cri.go:87] found id: ""
	I0114 10:57:18.100378  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.100384  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:57:18.100389  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:57:18.100428  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:57:18.123283  189998 cri.go:87] found id: ""
	I0114 10:57:18.123310  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.123318  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:57:18.123325  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:57:18.123386  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:57:18.146331  189998 cri.go:87] found id: ""
	I0114 10:57:18.146357  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.146364  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:57:18.146372  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:57:18.146420  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:57:18.170499  189998 cri.go:87] found id: ""
	I0114 10:57:18.170523  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.170538  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:57:18.170546  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:57:18.170597  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:57:18.195071  189998 cri.go:87] found id: ""
	I0114 10:57:18.195093  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.195100  189998 logs.go:276] No container was found matching "kube-controller-manager"
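After the second timeout, minikube enumerates each control-plane component through crictl and finds nothing. The same sweep as a short loop, using only the flags shown above; a zero count for every name confirms that no component container was ever created:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kubernetes-dashboard storage-provisioner kube-controller-manager; do
	  printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	done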
	I0114 10:57:18.195111  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:57:18.195125  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:57:18.212387  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12529]: E0114 10:56:28.256460   12529 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.212758  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12539]: E0114 10:56:28.983861   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.213115  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:29 kubernetes-upgrade-104742 kubelet[12551]: E0114 10:56:29.737789   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.213474  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:30 kubernetes-upgrade-104742 kubelet[12562]: E0114 10:56:30.501487   12562 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.213819  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:31 kubernetes-upgrade-104742 kubelet[12573]: E0114 10:56:31.281858   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.214177  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:31 kubernetes-upgrade-104742 kubelet[12584]: E0114 10:56:31.994205   12584 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.214529  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:32 kubernetes-upgrade-104742 kubelet[12595]: E0114 10:56:32.747308   12595 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.214873  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:33 kubernetes-upgrade-104742 kubelet[12606]: E0114 10:56:33.490167   12606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.215223  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:34 kubernetes-upgrade-104742 kubelet[12617]: E0114 10:56:34.235997   12617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.215582  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:34 kubernetes-upgrade-104742 kubelet[12628]: E0114 10:56:34.985443   12628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.215980  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:35 kubernetes-upgrade-104742 kubelet[12639]: E0114 10:56:35.745594   12639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.216331  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:36 kubernetes-upgrade-104742 kubelet[12650]: E0114 10:56:36.498581   12650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.216676  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:37 kubernetes-upgrade-104742 kubelet[12662]: E0114 10:56:37.249571   12662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.217031  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:38 kubernetes-upgrade-104742 kubelet[12672]: E0114 10:56:38.008471   12672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.217382  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:38 kubernetes-upgrade-104742 kubelet[12682]: E0114 10:56:38.763526   12682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.217737  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:39 kubernetes-upgrade-104742 kubelet[12693]: E0114 10:56:39.492292   12693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.218085  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:40 kubernetes-upgrade-104742 kubelet[12704]: E0114 10:56:40.251896   12704 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.218440  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:40 kubernetes-upgrade-104742 kubelet[12714]: E0114 10:56:40.991722   12714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.218787  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:41 kubernetes-upgrade-104742 kubelet[12725]: E0114 10:56:41.762343   12725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.219140  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:42 kubernetes-upgrade-104742 kubelet[12734]: E0114 10:56:42.537442   12734 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.219496  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:43 kubernetes-upgrade-104742 kubelet[12743]: E0114 10:56:43.263426   12743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.219862  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:44 kubernetes-upgrade-104742 kubelet[12752]: E0114 10:56:44.028905   12752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.220226  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:44 kubernetes-upgrade-104742 kubelet[12763]: E0114 10:56:44.790619   12763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.220579  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:45 kubernetes-upgrade-104742 kubelet[12773]: E0114 10:56:45.497745   12773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.220922  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:46 kubernetes-upgrade-104742 kubelet[12784]: E0114 10:56:46.240817   12784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.221280  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:46 kubernetes-upgrade-104742 kubelet[12794]: E0114 10:56:46.985491   12794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.221634  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:47 kubernetes-upgrade-104742 kubelet[12806]: E0114 10:56:47.733518   12806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.221977  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:48 kubernetes-upgrade-104742 kubelet[12817]: E0114 10:56:48.494874   12817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.222336  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:49 kubernetes-upgrade-104742 kubelet[12829]: E0114 10:56:49.285800   12829 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.222679  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:49 kubernetes-upgrade-104742 kubelet[12839]: E0114 10:56:49.991740   12839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.223023  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:50 kubernetes-upgrade-104742 kubelet[12850]: E0114 10:56:50.744969   12850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.223376  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:51 kubernetes-upgrade-104742 kubelet[12862]: E0114 10:56:51.507538   12862 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.223755  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:52 kubernetes-upgrade-104742 kubelet[12872]: E0114 10:56:52.265215   12872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.224105  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:53 kubernetes-upgrade-104742 kubelet[12882]: E0114 10:56:53.002039   12882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.224459  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:53 kubernetes-upgrade-104742 kubelet[12892]: E0114 10:56:53.746428   12892 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.224811  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:54 kubernetes-upgrade-104742 kubelet[12901]: E0114 10:56:54.504240   12901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.225164  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:55 kubernetes-upgrade-104742 kubelet[12911]: E0114 10:56:55.249839   12911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.225510  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:56 kubernetes-upgrade-104742 kubelet[12922]: E0114 10:56:56.003330   12922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.225873  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:56 kubernetes-upgrade-104742 kubelet[12932]: E0114 10:56:56.756370   12932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.226221  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:57 kubernetes-upgrade-104742 kubelet[12941]: E0114 10:56:57.501513   12941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.226573  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:58 kubernetes-upgrade-104742 kubelet[12952]: E0114 10:56:58.276122   12952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.226920  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:58 kubernetes-upgrade-104742 kubelet[12962]: E0114 10:56:58.994819   12962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.227277  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:59 kubernetes-upgrade-104742 kubelet[12972]: E0114 10:56:59.747758   12972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.227620  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:00 kubernetes-upgrade-104742 kubelet[12981]: E0114 10:57:00.491874   12981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.227982  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:01 kubernetes-upgrade-104742 kubelet[12992]: E0114 10:57:01.238429   12992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.228331  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:01 kubernetes-upgrade-104742 kubelet[13002]: E0114 10:57:01.990521   13002 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.228676  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:02 kubernetes-upgrade-104742 kubelet[13013]: E0114 10:57:02.737359   13013 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.229021  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:03 kubernetes-upgrade-104742 kubelet[13025]: E0114 10:57:03.484814   13025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.229376  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:04 kubernetes-upgrade-104742 kubelet[13035]: E0114 10:57:04.235719   13035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.229730  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:05 kubernetes-upgrade-104742 kubelet[13046]: E0114 10:57:05.005556   13046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.230076  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:05 kubernetes-upgrade-104742 kubelet[13056]: E0114 10:57:05.773972   13056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.230424  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:06 kubernetes-upgrade-104742 kubelet[13067]: E0114 10:57:06.495256   13067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.230773  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:07 kubernetes-upgrade-104742 kubelet[13079]: E0114 10:57:07.240391   13079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.231116  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:07 kubernetes-upgrade-104742 kubelet[13090]: E0114 10:57:07.993801   13090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.231470  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:08 kubernetes-upgrade-104742 kubelet[13100]: E0114 10:57:08.738803   13100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.231901  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:09 kubernetes-upgrade-104742 kubelet[13111]: E0114 10:57:09.491099   13111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.232262  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:10 kubernetes-upgrade-104742 kubelet[13122]: E0114 10:57:10.236116   13122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.232607  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:10 kubernetes-upgrade-104742 kubelet[13133]: E0114 10:57:10.993398   13133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.232951  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:11 kubernetes-upgrade-104742 kubelet[13144]: E0114 10:57:11.741062   13144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.233308  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:12 kubernetes-upgrade-104742 kubelet[13155]: E0114 10:57:12.497453   13155 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.233669  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:13 kubernetes-upgrade-104742 kubelet[13165]: E0114 10:57:13.239621   13165 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.234015  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:13 kubernetes-upgrade-104742 kubelet[13176]: E0114 10:57:13.994697   13176 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.234366  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:14 kubernetes-upgrade-104742 kubelet[13185]: E0114 10:57:14.740831   13185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.234743  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:15 kubernetes-upgrade-104742 kubelet[13196]: E0114 10:57:15.493497   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.235092  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:16 kubernetes-upgrade-104742 kubelet[13207]: E0114 10:57:16.244127   13207 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.235450  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:16 kubernetes-upgrade-104742 kubelet[13217]: E0114 10:57:16.987479   13217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.235819  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:17 kubernetes-upgrade-104742 kubelet[13228]: E0114 10:57:17.738714   13228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
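This journal excerpt is the actual root cause: systemd restarts the kubelet roughly every 0.75s, and every attempt exits immediately because the v1.25.3 kubelet no longer accepts '--cni-conf-dir', a dockershim-era kubelet flag that newer kubelets no longer parse. A flags file left over from the older Kubernetes version on this upgraded profile therefore keeps the kubelet from ever starting, which is why port 10248 stays refused for the whole 4m0s wait. A hedged cleanup sketch, assuming the stale flag sits in the kubeadm-flags.env file the log shows being written ('systemctl cat kubelet' reveals which files actually feed the unit, since the flag may equally live in a systemd drop-in):

	# locate the flags the unit passes to the kubelet
	systemctl cat kubelet --no-pager
	grep -oE -- '--cni-[a-z-]+=[^" ]*' /var/lib/kubelet/kubeadm-flags.env
	# strip the removed dockershim-era flags, then restart
	sudo sed -i -E 's/--cni-(conf|bin)-dir=[^" ]*//g' /var/lib/kubelet/kubeadm-flags.env
	sudo systemctl restart kubelet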
	I0114 10:57:18.235936  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:57:18.235952  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:57:18.253006  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:57:18.253035  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:57:18.307257  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
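The 'describe nodes' failure is expected at this point: with no kube-apiserver container running, nothing listens on the profile's apiserver port. A direct check, assuming the localhost:8443 endpoint from the error text:

	curl -sk https://localhost:8443/healthz || echo 'apiserver not listening'
	sudo ss -tlnp | grep 8443 || echo 'no listener on port 8443'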
	I0114 10:57:18.307290  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:57:18.307303  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:57:18.364637  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:57:18.364672  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0114 10:57:18.391231  189998 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 10:57:18.391270  189998 out.go:239] * 
	W0114 10:57:18.391443  189998 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:57:18.391466  189998 out.go:239] * 
	W0114 10:57:18.392419  189998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:57:18.395348  189998 out.go:177] X Problems detected in kubelet:
	I0114 10:57:18.396939  189998 out.go:177]   Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12529]: E0114 10:56:28.256460   12529 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.398446  189998 out.go:177]   Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12539]: E0114 10:56:28.983861   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.399845  189998 out.go:177]   Jan 14 10:56:29 kubernetes-upgrade-104742 kubelet[12551]: E0114 10:56:29.737789   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.403546  189998 out.go:177] 
	W0114 10:57:18.405206  189998 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:57:18.405335  189998 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 10:57:18.405403  189998 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 10:57:18.407110  189998 out.go:177] 

                                                
                                                
** /stderr **
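The root cause is visible in the "Problems detected in kubelet" lines above: kubelet v1.25.3 exits immediately because it is started with '--cni-conf-dir', one of the dockershim-era networking flags removed from the kubelet around Kubernetes 1.24, which is why the health probe on 127.0.0.1:10248 keeps getting connection refused. A minimal way to confirm this on the node over 'minikube ssh -p kubernetes-upgrade-104742' (a sketch; of these paths only /var/lib/kubelet/kubeadm-flags.env appears in the log above):

	# look for the stale dockershim flag in the kubelet env file kubeadm writes
	grep -- '--cni-conf-dir' /var/lib/kubelet/kubeadm-flags.env
	# confirm the service is crash-looping and see the "unknown flag" errors
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 20
	# re-run the probe kubeadm uses
	curl -sSL http://localhost:10248/healthz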
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-104742 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-104742 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-104742 version --output=json: exit status 1 (51.978335ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "26",
	    "gitVersion": "v1.26.0",
	    "gitCommit": "b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d",
	    "gitTreeState": "clean",
	    "buildDate": "2022-12-08T19:58:30Z",
	    "goVersion": "go1.19.4",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.7"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
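Note that 'kubectl version --output=json' exits non-zero whenever the apiserver cannot be reached, even though the clientVersion block above printed normally. For reference (not something the test runs), the client and kustomize versions can be read without a server round-trip by adding the client-only flag:

	kubectl --context kubernetes-upgrade-104742 version --output=json --client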
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-01-14 10:57:18.866167416 +0000 UTC m=+3084.972198026
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-104742
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-104742:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35d0481c042eebfbc2ce41ea3f6ffab572ea6807436d7458bebe8b75a363637e",
	        "Created": "2023-01-14T10:47:54.17898881Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190497,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:48:39.983285862Z",
	            "FinishedAt": "2023-01-14T10:48:38.57736201Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/35d0481c042eebfbc2ce41ea3f6ffab572ea6807436d7458bebe8b75a363637e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35d0481c042eebfbc2ce41ea3f6ffab572ea6807436d7458bebe8b75a363637e/hostname",
	        "HostsPath": "/var/lib/docker/containers/35d0481c042eebfbc2ce41ea3f6ffab572ea6807436d7458bebe8b75a363637e/hosts",
	        "LogPath": "/var/lib/docker/containers/35d0481c042eebfbc2ce41ea3f6ffab572ea6807436d7458bebe8b75a363637e/35d0481c042eebfbc2ce41ea3f6ffab572ea6807436d7458bebe8b75a363637e-json.log",
	        "Name": "/kubernetes-upgrade-104742",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-104742:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-104742",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5de3f9629596a4272a7d62651092e72a3b9c5b38d5d63a4feea978ebda0cc977-init/diff:/var/lib/docker/overlay2/cfa67474dfffbd23c875ed1363951467d9d88e2b76451e5643f2505208741f3b/diff:/var/lib/docker/overlay2/073ec06077c9f139927a68d24e4f683141baf9acf954f7927a62d439b8e24069/diff:/var/lib/docker/overlay2/100e369464b40a65b67d4855b5a41f41832f93605f574ff35657d9b2d0ee5b4f/diff:/var/lib/docker/overlay2/e2f9a50fd4c46aeeaf52dd5d2c45c5548e516eaa4949cae4e8f8be3dda02e560/diff:/var/lib/docker/overlay2/6d3b34d6067ad9d3ff171a32fea0902c6748df9aeb5a46e12971cdc70934e200/diff:/var/lib/docker/overlay2/44f244a49f3260ebade676a0e6177935228bcd4504617609ee4343aa284e724c/diff:/var/lib/docker/overlay2/1cba83561484d9f781c67421553c95b75266d2217256379d5787e510ac28483f/diff:/var/lib/docker/overlay2/9ec5ab0f595877fa3d60d26e7aa243026d8b45fea861a3e12c469d81ab1ffe6c/diff:/var/lib/docker/overlay2/30d22319caaa0760daf22d54c95076cad3b970afb61aa7c018ac37b623117613/diff:/var/lib/docker/overlay2/1f5756
3ce3807317a405416fbe25b96e16e33708f4f97020c4f82e1e2b4da5ed/diff:/var/lib/docker/overlay2/604bdff9bf4c8bdcc970ae4f7e8734a5aa27c04fb328f61dea00c3740f12daba/diff:/var/lib/docker/overlay2/03f7c27604538c82d3d43dfde85aa33dc8f2658b93b51f500b27edd3b1aaed98/diff:/var/lib/docker/overlay2/f9ceccc940eb08b69d102c744810d1aff5795c7e9a58c20d43ca6857fa21b8ea/diff:/var/lib/docker/overlay2/576f7412e6f61feeea74cdfbae850007513e8aa407ce5e45f903c70ce2f89fe5/diff:/var/lib/docker/overlay2/958517a359371ca3276a50323466f96ec3d5d7687cb2f26c287a9a343fcbcd20/diff:/var/lib/docker/overlay2/c09247966342dd284c940bcd881b6187476a63e53055e9f378aaa25ceaa86263/diff:/var/lib/docker/overlay2/85bda0ea7bf5a8c05a6eb175b445c71a710e3e392fc1b70957e3902cec94586f/diff:/var/lib/docker/overlay2/7cde8ffb6999e9d99ff44b83daaf1a781dd6546a7a96eda5b901e88658c78f74/diff:/var/lib/docker/overlay2/92d42128dacdf015e3ce466b8e365093147199e2fffcda0192857efed322565f/diff:/var/lib/docker/overlay2/0f2dff826ddc5a3be056ecb8791656438fd8d9122e0bfa4bf808ff640ddd0366/diff:/var/lib/d
ocker/overlay2/44a9089aeee67c883a076dc1940e80698f487176c3d197f321518402ce7a4467/diff:/var/lib/docker/overlay2/6068fe71ba149c31fa6947b978b0755f11f334f9d40e14b5c9946cf9a103ca68/diff:/var/lib/docker/overlay2/adb5ed5619948c4b7e4d83048cd96cc3d6ded2ae453b67da2e120f4ada989e97/diff:/var/lib/docker/overlay2/d633ebbd9eed2900d2e31406be983b7d21e70ac3c07593de38c5cfb0628275ae/diff:/var/lib/docker/overlay2/87f4a27d0733b1bdf23169c5079f854d115bfd926c76a346d28259b8f2abc0f9/diff:/var/lib/docker/overlay2/4b514ac9d0ce1d6bff4ec77673304888b5a45fca7d9a52d872475d70a4bad242/diff:/var/lib/docker/overlay2/76f964a17c8531bd97500c5bf3aa0b003b317ad1c055c0d1c475d41666734b75/diff:/var/lib/docker/overlay2/0a0f3b972da362a17d673ffdcd0d42b3663faeed5e799b2b38868036d5cd1533/diff:/var/lib/docker/overlay2/a07c41d799979e1f64f7bf3d0bcd9a98b724ebea06eafa1a01b83c71c76f9d3c/diff:/var/lib/docker/overlay2/0be1fd774bf851dd17c525a17f8a015aa3c0f1f71b29033666a62cd2be3a495f/diff:/var/lib/docker/overlay2/62db7acc5b1cb93b6e26eb5c826b67cebb252c079fd5a060ba843227c91
c864f/diff:/var/lib/docker/overlay2/076dea682ce5421a9c145f8038044bf438f06c3635406efdf60ef350f109389f/diff:/var/lib/docker/overlay2/143de4d69dc548610d4e281cfb14bf70d7ed81172bee212fc15755591dea37b4/diff:/var/lib/docker/overlay2/89ecf87d7b563ffa220047c3bb13c7ea55ebb215cbd3d2731d795ce559d5b9b4/diff:/var/lib/docker/overlay2/e9f8c0a087f0832425535d00100392d8b267181825a52ae7291fb7fe7ab62614/diff:/var/lib/docker/overlay2/66fb715c26be36afdfe15f9e2562f7320c04421f7bff30da6424afc0395d1f19/diff:/var/lib/docker/overlay2/24d5a6709af6741b4216757263798c2fd2ffbe83a81f68619cd00e2107b4ff3d/diff:/var/lib/docker/overlay2/865a5915817b4d31f71061a418fcc1c284ee124c9b3a275c3676cb2b3fba32dd/diff:/var/lib/docker/overlay2/b33545ce05c040395c79c17ae2fc9b23755b589f9f6e2f94121abe1cc5c2869c/diff:/var/lib/docker/overlay2/22f66646b2dde6f03ac24f5affc8a43db7aaae6b2e9677ae4cf9e607238761e4/diff:/var/lib/docker/overlay2/789c281f8e044ab343c9800dc7431b8fbaf616ecd3419979e8a3dfbb605f8efe/diff:/var/lib/docker/overlay2/6dd50d303cdaa1e2fa047ed92b16580d8b0c2c
77552b9a13e0c356884add5310/diff:/var/lib/docker/overlay2/b1d8d5816bce1b48db468539e1bc343a7c87dee89fb1783174081611a7e0b2ee/diff:/var/lib/docker/overlay2/529b543dd76f6ad1b33944f7c0767adca9befb5d162c4c1bf13756f3c0048fb4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5de3f9629596a4272a7d62651092e72a3b9c5b38d5d63a4feea978ebda0cc977/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5de3f9629596a4272a7d62651092e72a3b9c5b38d5d63a4feea978ebda0cc977/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5de3f9629596a4272a7d62651092e72a3b9c5b38d5d63a4feea978ebda0cc977/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-104742",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-104742/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-104742",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-104742",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-104742",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d3532dbaf1765893cde57c6948276b0e932b406222d64f71b85b7d9a0ae1cf0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7d3532dbaf17",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-104742": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "35d0481c042e",
	                        "kubernetes-upgrade-104742"
	                    ],
	                    "NetworkID": "cf593cd02a0a43989f9f3cd8a3f3b8906354ee4d81236d2186944aefb66ef09b",
	                    "EndpointID": "64e4177c955cb1638b104e50ed6c4e27c75afca0ff6d70841c5a556ac3d7adbb",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
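Most of the inspect dump above is irrelevant to this failure; the fields the post-mortem actually cares about (container state and the published ports) can be pulled directly with docker's Go-template formatting. A generic sketch, not part of the harness:

	docker inspect -f '{{.State.Status}} (started {{.State.StartedAt}})' kubernetes-upgrade-104742
	docker inspect -f '{{json .NetworkSettings.Ports}}' kubernetes-upgrade-104742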
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-104742 -n kubernetes-upgrade-104742
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-104742 -n kubernetes-upgrade-104742: exit status 2 (422.466825ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
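The "Running" above reflects only the Host field selected by the --format template, while the non-zero exit status indicates that other components are not healthy. The remaining status fields can be queried through the same Go-template mechanism (a sketch using the field names minikube status documents):

	out/minikube-linux-amd64 status -p kubernetes-upgrade-104742 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'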
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-104742 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-104807            | old-k8s-version-104807       | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:50 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104807                                  | old-k8s-version-104807       | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:50 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104807                 | old-k8s-version-104807       | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:50 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104807                                  | old-k8s-version-104807       | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-104947                 | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:50 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-104947                                       | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:51 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-105009                | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:51 UTC | 14 Jan 23 10:51 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-105009                                      | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:51 UTC | 14 Jan 23 10:51 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-104947                      | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:51 UTC | 14 Jan 23 10:51 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-104947                                       | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:51 UTC | 14 Jan 23 10:56 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-105009                     | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:51 UTC | 14 Jan 23 10:51 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-105009                                      | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:51 UTC | 14 Jan 23 10:56 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p no-preload-104947 sudo                                  | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p no-preload-104947                                       | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p no-preload-104947                                       | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p no-preload-104947                                       | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	| delete  | -p no-preload-104947                                       | no-preload-104947            | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	| delete  | -p                                                         | disable-driver-mounts-105641 | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | disable-driver-mounts-105641                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-105641 | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC |                     |
	|         | default-k8s-diff-port-105641                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-105009 sudo                                 | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-105009                                      | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-105009                                      | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-105009                                      | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	| delete  | -p embed-certs-105009                                      | embed-certs-105009           | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	| start   | -p newest-cni-105657 --memory=2200 --alsologtostderr       | newest-cni-105657            | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:56:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:56:57.411855  247733 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:56:57.412156  247733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:56:57.412171  247733 out.go:309] Setting ErrFile to fd 2...
	I0114 10:56:57.412178  247733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:56:57.412439  247733 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:56:57.413563  247733 out.go:303] Setting JSON to false
	I0114 10:56:57.414909  247733 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5965,"bootTime":1673687853,"procs":551,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:56:57.414982  247733 start.go:135] virtualization: kvm guest
	I0114 10:56:57.417522  247733 out.go:177] * [newest-cni-105657] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:56:57.418950  247733 notify.go:220] Checking for updates...
	I0114 10:56:57.420377  247733 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:56:57.421789  247733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:56:57.423347  247733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:56:57.424830  247733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:56:57.426373  247733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:56:57.428322  247733 config.go:180] Loaded profile config "default-k8s-diff-port-105641": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:56:57.428446  247733 config.go:180] Loaded profile config "kubernetes-upgrade-104742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:56:57.428568  247733 config.go:180] Loaded profile config "old-k8s-version-104807": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0114 10:56:57.428627  247733 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:56:57.466699  247733 docker.go:138] docker version: linux-20.10.22
	I0114 10:56:57.466821  247733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:56:57.586230  247733 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-14 10:56:57.492619271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
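
The driver probe above shells out to docker system info --format "{{json .}}" and decodes the JSON payload recorded in the log line. A minimal sketch of that pattern, decoding only a handful of the fields visible above (the struct is illustrative, not minikube's real type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo keeps just the fields a health check needs; the real
    // payload (see the docker info line above) carries many more.
    type dockerInfo struct {
        ServerVersion     string `json:"ServerVersion"`
        ContainersRunning int    `json:"ContainersRunning"`
        NCPU              int    `json:"NCPU"`
        MemTotal          int64  `json:"MemTotal"`
        CgroupDriver      string `json:"CgroupDriver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
    }
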
	I0114 10:56:57.586335  247733 docker.go:255] overlay module found
	I0114 10:56:57.588766  247733 out.go:177] * Using the docker driver based on user configuration
	I0114 10:56:57.590116  247733 start.go:294] selected driver: docker
	I0114 10:56:57.590144  247733 start.go:838] validating driver "docker" against <nil>
	I0114 10:56:57.590169  247733 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:56:57.591143  247733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:56:57.701738  247733 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-14 10:56:57.612706663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:56:57.701950  247733 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0114 10:56:57.701985  247733 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0114 10:56:57.702207  247733 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0114 10:56:57.704939  247733 out.go:177] * Using Docker driver with root privileges
	I0114 10:56:57.706296  247733 cni.go:95] Creating CNI manager for ""
	I0114 10:56:57.706322  247733 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:56:57.706340  247733 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0114 10:56:57.706355  247733 start_flags.go:319] config:
	{Name:newest-cni-105657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-105657 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:56:57.708022  247733 out.go:177] * Starting control plane node newest-cni-105657 in cluster newest-cni-105657
	I0114 10:56:57.709397  247733 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:56:57.710890  247733 out.go:177] * Pulling base image ...
	I0114 10:56:57.712276  247733 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:56:57.712311  247733 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:56:57.712318  247733 cache.go:57] Caching tarball of preloaded images
	I0114 10:56:57.712378  247733 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:56:57.712577  247733 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:56:57.712592  247733 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:56:57.712690  247733 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/config.json ...
	I0114 10:56:57.712716  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/config.json: {Name:mk1536fc08ff3d8b6ce3cb35794f6d2065b68d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:56:57.742006  247733 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:56:57.742041  247733 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:56:57.742060  247733 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:56:57.742104  247733 start.go:364] acquiring machines lock for newest-cni-105657: {Name:mkb0325a9b8bc94f5f2a1e2e6ef054a512b6bd05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:56:57.742241  247733 start.go:368] acquired machines lock for "newest-cni-105657" in 113.242µs
	I0114 10:56:57.742273  247733 start.go:93] Provisioning new machine with config: &{Name:newest-cni-105657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-105657 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:56:57.742400  247733 start.go:125] createHost starting for "" (driver="docker")
	I0114 10:56:53.363207  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:56:53.363241  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:56:53.363249  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:56:53.363257  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:56:53.363267  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:56:53.363277  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:56:53.363294  208230 retry.go:31] will retry after 1.109165795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:56:54.478132  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:56:54.478167  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:56:54.478175  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:56:54.478182  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:56:54.478192  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:56:54.478200  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:56:54.478223  208230 retry.go:31] will retry after 1.54277181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:56:56.025488  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:56:56.025518  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:56:56.025527  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:56:56.025534  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:56:56.025544  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:56:56.025555  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:56:56.025573  208230 retry.go:31] will retry after 2.200241603s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
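
The retry.go:31 lines in this stanza poll kube-system with a delay that grows between attempts (1.1s, 1.5s, 2.2s, ...). A rough sketch of such a wait loop, assuming a jittered roughly-1.5x backoff (the helper name and growth factor are guesses for illustration, not minikube's exact policy):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries check until it succeeds or the deadline passes,
    // sleeping a jittered, growing interval between attempts, much like
    // the "will retry after ..." lines above.
    func waitFor(deadline time.Duration, check func() error) error {
        start := time.Now()
        delay := time.Second
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out waiting: %w", err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
    }

    func main() {
        attempts := 0
        err := waitFor(30*time.Second, func() error {
            attempts++
            if attempts > 4 {
                return nil
            }
            return errors.New("missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler")
        })
        fmt.Println("done:", err)
    }
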
	I0114 10:56:57.745503  247733 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0114 10:56:57.745818  247733 start.go:159] libmachine.API.Create for "newest-cni-105657" (driver="docker")
	I0114 10:56:57.745855  247733 client.go:168] LocalClient.Create starting
	I0114 10:56:57.745949  247733 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem
	I0114 10:56:57.745991  247733 main.go:134] libmachine: Decoding PEM data...
	I0114 10:56:57.746017  247733 main.go:134] libmachine: Parsing certificate...
	I0114 10:56:57.746086  247733 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem
	I0114 10:56:57.746111  247733 main.go:134] libmachine: Decoding PEM data...
	I0114 10:56:57.746128  247733 main.go:134] libmachine: Parsing certificate...
	I0114 10:56:57.746475  247733 cli_runner.go:164] Run: docker network inspect newest-cni-105657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 10:56:57.775188  247733 cli_runner.go:211] docker network inspect newest-cni-105657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 10:56:57.775257  247733 network_create.go:280] running [docker network inspect newest-cni-105657] to gather additional debugging logs...
	I0114 10:56:57.775279  247733 cli_runner.go:164] Run: docker network inspect newest-cni-105657
	W0114 10:56:57.802088  247733 cli_runner.go:211] docker network inspect newest-cni-105657 returned with exit code 1
	I0114 10:56:57.802131  247733 network_create.go:283] error running [docker network inspect newest-cni-105657]: docker network inspect newest-cni-105657: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-105657
	I0114 10:56:57.802145  247733 network_create.go:285] output of [docker network inspect newest-cni-105657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-105657
	
	** /stderr **
	I0114 10:56:57.802182  247733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:56:57.827003  247733 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fb325bb87cdc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ae:d7:be:f3}}
	I0114 10:56:57.828120  247733 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8d666bf786b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:59:8a:0a:b1}}
	I0114 10:56:57.829694  247733 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf593cd02a0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:b7:ae:53:2b}}
	I0114 10:56:57.831906  247733 network.go:277] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0009d4020] misses:0}
	I0114 10:56:57.831965  247733 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 10:56:57.831984  247733 network_create.go:123] attempt to create docker network newest-cni-105657 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0114 10:56:57.832044  247733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-105657 newest-cni-105657
	I0114 10:56:57.902346  247733 network_create.go:107] docker network newest-cni-105657 192.168.76.0/24 created
	I0114 10:56:57.902381  247733 kic.go:117] calculated static IP "192.168.76.2" for the "newest-cni-105657" container
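
network.go above scans candidate private /24s, skipping ranges already bound to a host bridge, and the 49 -> 58 -> 67 -> 76 progression steps by 9. A toy version of that scan, under the simplifying assumption that "taken" just means a local interface owns an address in the range (the skipped-subnet lines show the real check also inspects the matching docker bridge):

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... in
    // steps of 9, returning the first range no local interface occupies.
    func firstFreeSubnet() (string, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return "", err
        }
        for third := 49; third <= 247; third += 9 {
            _, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            taken := false
            for _, a := range addrs {
                if ipn, ok := a.(*net.IPNet); ok && candidate.Contains(ipn.IP) {
                    taken = true
                    break
                }
            }
            if !taken {
                return candidate.String(), nil
            }
        }
        return "", fmt.Errorf("no free 192.168.x.0/24 subnet")
    }

    func main() {
        subnet, err := firstFreeSubnet()
        if err != nil {
            panic(err)
        }
        // The docker network is then created with gateway .1, and the
        // node container is pinned to the first client address, .2.
        fmt.Println("using subnet:", subnet)
    }
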
	I0114 10:56:57.902445  247733 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 10:56:57.930703  247733 cli_runner.go:164] Run: docker volume create newest-cni-105657 --label name.minikube.sigs.k8s.io=newest-cni-105657 --label created_by.minikube.sigs.k8s.io=true
	I0114 10:56:57.960854  247733 oci.go:103] Successfully created a docker volume newest-cni-105657
	I0114 10:56:57.960950  247733 cli_runner.go:164] Run: docker run --rm --name newest-cni-105657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-105657 --entrypoint /usr/bin/test -v newest-cni-105657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 10:56:58.573196  247733 oci.go:107] Successfully prepared a docker volume newest-cni-105657
	I0114 10:56:58.573251  247733 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:56:58.573276  247733 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 10:56:58.573349  247733 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-105657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
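
Note that the extraction never untars on the host: the lz4 preload tarball and the named volume are both mounted into a throwaway kicbase container, and tar runs inside it. Reproducing that step from Go with os/exec would look roughly like this (arguments copied from the logged command; error handling trimmed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4"
        volume := "newest-cni-105657"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c"

        // docker run --rm --entrypoint /usr/bin/tar \
        //   -v <tarball>:/preloaded.tar:ro -v <volume>:/extractDir \
        //   <image> -I lz4 -xf /preloaded.tar -C /extractDir
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
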
	I0114 10:56:58.230180  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:56:58.230215  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:56:58.230223  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:56:58.230230  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:56:58.230241  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:56:58.230249  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:56:58.230268  208230 retry.go:31] will retry after 2.087459713s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:57:00.321851  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:57:00.321888  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:57:00.321896  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:57:00.321902  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:57:00.321912  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:57:00.321920  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:57:00.321938  208230 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:57:02.941600  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:57:02.941635  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:57:02.941644  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:57:02.941651  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:57:02.941662  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:57:02.941669  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:57:02.941687  208230 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:57:05.418973  242234 kubeadm.go:317] [apiclient] All control plane components are healthy after 9.502693 seconds
	I0114 10:57:05.419114  242234 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 10:57:05.428993  242234 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 10:57:05.945429  242234 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 10:57:05.945709  242234 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-105641 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 10:57:06.454376  242234 kubeadm.go:317] [bootstrap-token] Using token: nkbs25.81y4ll5rjp9ab04g
	I0114 10:57:06.456084  242234 out.go:204]   - Configuring RBAC rules ...
	I0114 10:57:06.456228  242234 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0114 10:57:06.459066  242234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0114 10:57:06.464321  242234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0114 10:57:06.466515  242234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0114 10:57:06.468574  242234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0114 10:57:06.470530  242234 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0114 10:57:06.479220  242234 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0114 10:57:06.640076  242234 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0114 10:57:06.864054  242234 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0114 10:57:06.865112  242234 kubeadm.go:317] 
	I0114 10:57:06.865208  242234 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0114 10:57:06.865217  242234 kubeadm.go:317] 
	I0114 10:57:06.865302  242234 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0114 10:57:06.865308  242234 kubeadm.go:317] 
	I0114 10:57:06.865337  242234 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0114 10:57:06.865705  242234 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0114 10:57:06.865779  242234 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0114 10:57:06.865788  242234 kubeadm.go:317] 
	I0114 10:57:06.865851  242234 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0114 10:57:06.865858  242234 kubeadm.go:317] 
	I0114 10:57:06.865914  242234 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0114 10:57:06.865921  242234 kubeadm.go:317] 
	I0114 10:57:06.865981  242234 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0114 10:57:06.866065  242234 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0114 10:57:06.866152  242234 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0114 10:57:06.866157  242234 kubeadm.go:317] 
	I0114 10:57:06.866491  242234 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0114 10:57:06.866595  242234 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0114 10:57:06.866605  242234 kubeadm.go:317] 
	I0114 10:57:06.866877  242234 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token nkbs25.81y4ll5rjp9ab04g \
	I0114 10:57:06.866992  242234 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 \
	I0114 10:57:06.867020  242234 kubeadm.go:317] 	--control-plane 
	I0114 10:57:06.867026  242234 kubeadm.go:317] 
	I0114 10:57:06.867119  242234 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0114 10:57:06.867127  242234 kubeadm.go:317] 
	I0114 10:57:06.867209  242234 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token nkbs25.81y4ll5rjp9ab04g \
	I0114 10:57:06.867288  242234 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 
	I0114 10:57:06.870659  242234 kubeadm.go:317] W0114 10:56:52.672739     735 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:57:06.870937  242234 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:57:06.871056  242234 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
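
The --discovery-token-ca-cert-hash value printed in the join commands above is not a digest of the ca.crt file itself; kubeadm defines it as the SHA-256 of the CA's DER-encoded public key (SubjectPublicKeyInfo). It can be recomputed from the cluster CA certificate (the ClientCAFile path from the kubeadm options later in this log) like so:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
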
	I0114 10:57:06.871081  242234 cni.go:95] Creating CNI manager for ""
	I0114 10:57:06.871091  242234 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:57:06.872897  242234 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 10:57:06.874316  242234 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 10:57:06.877783  242234 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:57:06.877803  242234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 10:57:06.928729  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
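
cni.go's "recommending kindnet" decision above keys off the driver/runtime pair: with the docker driver and a non-docker runtime, minikube needs a real CNI and reaches for kindnet. A toy rendering of that rule (the function and the fallback case are illustrative, not minikube's actual decision table):

    package main

    import "fmt"

    // recommendCNI sketches the decision behind the "docker driver +
    // containerd runtime found, recommending kindnet" line; toy only.
    func recommendCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(recommendCNI("docker", "containerd")) // kindnet
    }
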
	I0114 10:57:04.645502  247733 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-105657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.072058717s)
	I0114 10:57:04.645541  247733 kic.go:199] duration metric: took 6.072259 seconds to extract preloaded images to volume
	W0114 10:57:04.645693  247733 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0114 10:57:04.645811  247733 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 10:57:04.774946  247733 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-105657 --name newest-cni-105657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-105657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-105657 --network newest-cni-105657 --ip 192.168.76.2 --volume newest-cni-105657:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 10:57:05.194649  247733 cli_runner.go:164] Run: docker container inspect newest-cni-105657 --format={{.State.Running}}
	I0114 10:57:05.222117  247733 cli_runner.go:164] Run: docker container inspect newest-cni-105657 --format={{.State.Status}}
	I0114 10:57:05.247066  247733 cli_runner.go:164] Run: docker exec newest-cni-105657 stat /var/lib/dpkg/alternatives/iptables
	I0114 10:57:05.295888  247733 oci.go:144] the created container "newest-cni-105657" has a running status.
	I0114 10:57:05.295932  247733 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa...
	I0114 10:57:05.543584  247733 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 10:57:05.622037  247733 cli_runner.go:164] Run: docker container inspect newest-cni-105657 --format={{.State.Status}}
	I0114 10:57:05.654109  247733 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 10:57:05.654134  247733 kic_runner.go:114] Args: [docker exec --privileged newest-cni-105657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 10:57:05.738146  247733 cli_runner.go:164] Run: docker container inspect newest-cni-105657 --format={{.State.Status}}
	I0114 10:57:05.778147  247733 machine.go:88] provisioning docker machine ...
	I0114 10:57:05.778193  247733 ubuntu.go:169] provisioning hostname "newest-cni-105657"
	I0114 10:57:05.778259  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:05.807275  247733 main.go:134] libmachine: Using SSH client type: native
	I0114 10:57:05.807473  247733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33017 <nil> <nil>}
	I0114 10:57:05.807492  247733 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-105657 && echo "newest-cni-105657" | sudo tee /etc/hostname
	I0114 10:57:05.941949  247733 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-105657
	
	I0114 10:57:05.942031  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:05.967042  247733 main.go:134] libmachine: Using SSH client type: native
	I0114 10:57:05.967190  247733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33017 <nil> <nil>}
	I0114 10:57:05.967208  247733 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-105657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-105657/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-105657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:57:06.083557  247733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
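
The SSH script above is an idempotent /etc/hosts edit: leave the file alone if the hostname already resolves, rewrite the 127.0.1.1 line if one exists, and append one otherwise. The same logic in Go, operating on a local file for illustration (the real run goes over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func setHostsName(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        text := string(data)
        // Hostname already present on some line? Nothing to do.
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
            return nil
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + name
        if loopback.MatchString(text) {
            text = loopback.ReplaceAllString(text, entry)
        } else {
            if !strings.HasSuffix(text, "\n") {
                text += "\n"
            }
            text += entry + "\n"
        }
        return os.WriteFile(path, []byte(text), 0644)
    }

    func main() {
        fmt.Println(setHostsName("/tmp/hosts", "newest-cni-105657"))
    }
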
	I0114 10:57:06.083591  247733 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:57:06.083623  247733 ubuntu.go:177] setting up certificates
	I0114 10:57:06.083637  247733 provision.go:83] configureAuth start
	I0114 10:57:06.083769  247733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-105657
	I0114 10:57:06.107933  247733 provision.go:138] copyHostCerts
	I0114 10:57:06.108004  247733 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:57:06.108014  247733 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:57:06.108081  247733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:57:06.108154  247733 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:57:06.108163  247733 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:57:06.108185  247733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:57:06.108230  247733 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:57:06.108237  247733 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:57:06.108256  247733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:57:06.108295  247733 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.newest-cni-105657 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-105657]
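
The server certificate generated above is signed by the minikube CA and lists every name a client might dial in its SANs: the static container IP, 127.0.0.1, localhost, minikube, and the profile name. A self-signed stand-in that shows the SAN plumbing (the real certificate is CA-signed, per the ca-key argument in the log line):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-105657"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the san=[...] list in the log line above:
            DNSNames:    []string{"localhost", "minikube", "newest-cni-105657"},
            IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
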
	I0114 10:57:06.553689  247733 provision.go:172] copyRemoteCerts
	I0114 10:57:06.553746  247733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:57:06.553777  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:06.581483  247733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa Username:docker}
	I0114 10:57:06.668161  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:57:06.687407  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0114 10:57:06.707341  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 10:57:06.728096  247733 provision.go:86] duration metric: configureAuth took 644.441062ms
	I0114 10:57:06.728128  247733 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:57:06.728337  247733 config.go:180] Loaded profile config "newest-cni-105657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:57:06.728352  247733 machine.go:91] provisioned docker machine in 950.181859ms
	I0114 10:57:06.728358  247733 client.go:171] LocalClient.Create took 8.98249838s
	I0114 10:57:06.728378  247733 start.go:167] duration metric: libmachine.API.Create for "newest-cni-105657" took 8.982562636s
	I0114 10:57:06.728393  247733 start.go:300] post-start starting for "newest-cni-105657" (driver="docker")
	I0114 10:57:06.728402  247733 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:57:06.728454  247733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:57:06.728501  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:06.761700  247733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa Username:docker}
	I0114 10:57:06.852745  247733 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:57:06.856298  247733 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:57:06.856327  247733 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:57:06.856338  247733 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:57:06.856345  247733 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:57:06.856356  247733 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:57:06.856430  247733 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:57:06.856547  247733 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:57:06.856643  247733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:57:06.864730  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:57:06.884785  247733 start.go:303] post-start completed in 156.376474ms
	I0114 10:57:06.885147  247733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-105657
	I0114 10:57:06.911384  247733 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/config.json ...
	I0114 10:57:06.911752  247733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:57:06.911807  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:06.940903  247733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa Username:docker}
	I0114 10:57:07.028522  247733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:57:07.032555  247733 start.go:128] duration metric: createHost completed in 9.290138453s
	I0114 10:57:07.032579  247733 start.go:83] releasing machines lock for "newest-cni-105657", held for 9.290320794s
	I0114 10:57:07.032773  247733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-105657
	I0114 10:57:07.062534  247733 ssh_runner.go:195] Run: cat /version.json
	I0114 10:57:07.062587  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:07.062629  247733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:57:07.062679  247733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-105657
	I0114 10:57:07.093636  247733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa Username:docker}
	I0114 10:57:07.094550  247733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/newest-cni-105657/id_rsa Username:docker}
	I0114 10:57:07.216223  247733 ssh_runner.go:195] Run: systemctl --version
	I0114 10:57:07.220126  247733 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:57:07.231856  247733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:57:07.244373  247733 docker.go:189] disabling docker service ...
	I0114 10:57:07.244429  247733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:57:07.260681  247733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:57:07.270195  247733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:57:07.356034  247733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:57:07.433477  247733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:57:07.443377  247733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:57:07.456448  247733 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:57:07.464402  247733 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:57:07.473575  247733 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:57:07.481644  247733 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0114 10:57:07.490790  247733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:57:07.497494  247733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:57:07.510866  247733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:57:07.604024  247733 ssh_runner.go:195] Run: sudo systemctl restart containerd
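
The four sed runs above patch /etc/containerd/config.toml line-by-line (sandbox image, oom-score restriction, cgroup driver, CNI conf dir) before the daemon restart. The same whole-line key rewrite in Go, against a local copy for illustration:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // patchToml replaces every line containing `key = ...`, mirroring the
    // sed -e 's|^.*key = .*$|key = value|' invocations in the log above.
    func patchToml(path string, repl map[string]string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for key, value := range repl {
            re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
            data = re.ReplaceAll(data, []byte(key+" = "+value))
        }
        return os.WriteFile(path, data, 0644)
    }

    func main() {
        err := patchToml("/tmp/config.toml", map[string]string{
            "sandbox_image":          `"registry.k8s.io/pause:3.8"`,
            "restrict_oom_score_adj": "false",
            "SystemdCgroup":          "false",
            "conf_dir":               `"/etc/cni/net.mk"`,
        })
        fmt.Println(err)
    }
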
	I0114 10:57:07.683986  247733 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:57:07.684055  247733 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:57:07.689272  247733 start.go:472] Will wait 60s for crictl version
	I0114 10:57:07.689321  247733 ssh_runner.go:195] Run: which crictl
	I0114 10:57:07.692616  247733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:57:07.722650  247733 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:57:07.722712  247733 ssh_runner.go:195] Run: containerd --version
	I0114 10:57:07.752640  247733 ssh_runner.go:195] Run: containerd --version
	I0114 10:57:07.780856  247733 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:57:07.782296  247733 cli_runner.go:164] Run: docker network inspect newest-cni-105657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:57:07.813520  247733 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0114 10:57:07.816859  247733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:57:07.828537  247733 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0114 10:57:07.044827  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:57:07.044859  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:57:07.044869  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:57:07.044876  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:57:07.044885  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:57:07.044890  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:57:07.044908  208230 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:57:07.829998  247733 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:57:07.830084  247733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:57:07.857070  247733 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:57:07.857100  247733 containerd.go:467] Images already preloaded, skipping extraction
	I0114 10:57:07.857154  247733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:57:07.886250  247733 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:57:07.886273  247733 cache_images.go:84] Images are preloaded, skipping loading
	I0114 10:57:07.886321  247733 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:57:07.912923  247733 cni.go:95] Creating CNI manager for ""
	I0114 10:57:07.912947  247733 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:57:07.912961  247733 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0114 10:57:07.912983  247733 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-105657 NodeName:newest-cni-105657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:57:07.913148  247733 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-105657"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
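Note: the three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config like this can be sanity-checked by hand without touching cluster state via kubeadm's dry-run mode (path as in the log; --dry-run is a standard kubeadm init flag):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run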
	
	I0114 10:57:07.913251  247733 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-105657 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-105657 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
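Note: the kubelet unit text above follows the standard systemd drop-in pattern for 10-kubeadm.conf: the empty "ExecStart=" clears the command list inherited from the base kubelet.service, so the drop-in's full ExecStart replaces it rather than being appended. Two stock systemctl commands show the merged result on the node (illustrative, not part of the test):

	systemctl cat kubelet.service        # base unit followed by the drop-in
	systemctl show kubelet -p ExecStart  # the single effective ExecStart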
	I0114 10:57:07.913314  247733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:57:07.920491  247733 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:57:07.920555  247733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:57:07.927851  247733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (547 bytes)
	I0114 10:57:07.943500  247733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:57:07.959050  247733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I0114 10:57:07.973891  247733 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:57:07.977035  247733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:57:07.988427  247733 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657 for IP: 192.168.76.2
	I0114 10:57:07.988555  247733 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:57:07.988611  247733 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:57:07.988675  247733 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/client.key
	I0114 10:57:07.988693  247733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/client.crt with IP's: []
	I0114 10:57:08.120957  247733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/client.crt ...
	I0114 10:57:08.120987  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/client.crt: {Name:mkd5106cb822d7bdf9c2a466535e90beec6c4139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:08.121208  247733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/client.key ...
	I0114 10:57:08.121225  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/client.key: {Name:mk75a0a2b4eaed89187eeb921aa89d69bea1b87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:08.121359  247733 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.key.31bdca25
	I0114 10:57:08.121378  247733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 10:57:08.288083  247733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.crt.31bdca25 ...
	I0114 10:57:08.288117  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.crt.31bdca25: {Name:mkd2e44b58043c4cdee9ed45f50d99d6b06bb26f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:08.288322  247733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.key.31bdca25 ...
	I0114 10:57:08.288338  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.key.31bdca25: {Name:mkf58b9b901d639b3d8a8ccaaf29d0b043be67d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:08.288451  247733 certs.go:320] copying /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.crt
	I0114 10:57:08.288532  247733 certs.go:324] copying /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.key
	I0114 10:57:08.288598  247733 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.key
	I0114 10:57:08.288616  247733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.crt with IP's: []
	I0114 10:57:08.494870  247733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.crt ...
	I0114 10:57:08.494896  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.crt: {Name:mkd2f495ea46ce6794acd00fc2417eca90b31cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:08.495105  247733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.key ...
	I0114 10:57:08.495126  247733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.key: {Name:mk18e8fd41a765be6b3319fb4549c787bedf8ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:08.495371  247733 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:57:08.495422  247733 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:57:08.495440  247733 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:57:08.495472  247733 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:57:08.495502  247733 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:57:08.495531  247733 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:57:08.495583  247733 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:57:08.496201  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:57:08.514917  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:57:08.534952  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:57:08.552382  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/newest-cni-105657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:57:08.570954  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:57:08.589535  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:57:08.606841  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:57:08.624349  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:57:08.642262  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:57:08.660440  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:57:08.678653  247733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:57:08.698599  247733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:57:08.713490  247733 ssh_runner.go:195] Run: openssl version
	I0114 10:57:08.718925  247733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:57:08.726823  247733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:57:08.730270  247733 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:57:08.730330  247733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:57:08.735593  247733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:57:08.743448  247733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:57:08.750769  247733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:57:08.753939  247733 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:57:08.753994  247733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:57:08.758607  247733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:57:08.765990  247733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:57:08.773581  247733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:57:08.776548  247733 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:57:08.776586  247733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:57:08.781362  247733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
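Note: the hash/symlink pairs above implement OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink so verification can locate it by hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). Deriving one link name by hand (cert path from the log; the shell variable is illustrative):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"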
	I0114 10:57:08.788900  247733 kubeadm.go:396] StartCluster: {Name:newest-cni-105657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-105657 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:57:08.788977  247733 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:57:08.789010  247733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:57:08.813385  247733 cri.go:87] found id: ""
	I0114 10:57:08.813455  247733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:57:08.820671  247733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:57:08.827466  247733 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:57:08.827512  247733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:57:08.834458  247733 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:57:08.834515  247733 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
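Note: the long --ignore-preflight-errors list above names the kubeadm preflight checks that are expected to fail inside a Docker-driver node (Swap, Mem, SystemVerification, the bridge-nf sysctl, and the DirAvailable/FileAvailable checks for pre-seeded paths). The same checks can be exercised in isolation with the preflight phase (a sketch; the flag syntax matches the command in the log):

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem,SystemVerification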
	I0114 10:57:08.878822  247733 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 10:57:08.878886  247733 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:57:08.908732  247733 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:57:08.908812  247733 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:57:08.908886  247733 kubeadm.go:317] OS: Linux
	I0114 10:57:08.908960  247733 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:57:08.909014  247733 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:57:08.909081  247733 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:57:08.909141  247733 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:57:08.909220  247733 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:57:08.909293  247733 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:57:08.909353  247733 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:57:08.909403  247733 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:57:08.909474  247733 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:57:08.976908  247733 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 10:57:08.977050  247733 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 10:57:08.977208  247733 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 10:57:09.105407  247733 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:57:09.108270  247733 out.go:204]   - Generating certificates and keys ...
	I0114 10:57:09.108387  247733 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 10:57:09.108459  247733 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 10:57:09.163873  247733 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 10:57:09.367373  247733 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 10:57:09.471082  247733 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 10:57:09.643483  247733 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 10:57:10.041009  247733 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 10:57:10.041195  247733 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-105657] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0114 10:57:10.274975  247733 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 10:57:10.275115  247733 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-105657] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0114 10:57:10.400778  247733 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 10:57:10.522862  247733 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 10:57:10.646383  247733 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 10:57:10.646503  247733 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:57:10.832447  247733 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 10:57:11.071572  247733 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 10:57:11.226542  247733 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:57:11.442324  247733 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:57:11.455192  247733 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:57:11.456246  247733 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:57:11.456313  247733 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 10:57:11.548576  247733 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:57:07.763605  242234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:57:07.763769  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:07.763779  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=default-k8s-diff-port-105641 minikube.k8s.io/updated_at=2023_01_14T10_57_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:07.871340  242234 ops.go:34] apiserver oom_adj: -16
	I0114 10:57:07.871453  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:08.466133  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:08.966044  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:09.466266  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:09.965610  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:10.466337  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:10.965534  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:11.465775  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:11.550841  247733 out.go:204]   - Booting up control plane ...
	I0114 10:57:11.550958  247733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:57:11.552540  247733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:57:11.553744  247733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:57:11.554629  247733 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:57:11.556814  247733 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 10:57:10.930288  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:57:10.930322  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:57:10.930330  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:57:10.930337  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:57:10.930347  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:57:10.930355  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:57:10.930373  208230 retry.go:31] will retry after 6.722686426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:57:11.965790  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:12.465618  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:12.966378  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:13.466220  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:13.966429  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:14.465870  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:14.966446  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:15.466448  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:15.966421  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:16.465678  242234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:57:17.984863  189998 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 10:57:17.985003  189998 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 10:57:17.987514  189998 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 10:57:17.987633  189998 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:57:17.987808  189998 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:57:17.987916  189998 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:57:17.987971  189998 kubeadm.go:317] OS: Linux
	I0114 10:57:17.988035  189998 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:57:17.988109  189998 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:57:17.988196  189998 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:57:17.988277  189998 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:57:17.988346  189998 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:57:17.988410  189998 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:57:17.988470  189998 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:57:17.988533  189998 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:57:17.988623  189998 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:57:17.988722  189998 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 10:57:17.988849  189998 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 10:57:17.988972  189998 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 10:57:17.989063  189998 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:57:17.990409  189998 out.go:204]   - Generating certificates and keys ...
	I0114 10:57:17.990497  189998 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 10:57:17.990593  189998 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 10:57:17.990696  189998 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 10:57:17.990786  189998 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 10:57:17.990894  189998 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 10:57:17.990967  189998 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 10:57:17.991056  189998 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 10:57:17.991142  189998 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 10:57:17.991256  189998 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 10:57:17.991355  189998 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 10:57:17.991414  189998 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 10:57:17.991478  189998 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:57:17.991521  189998 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 10:57:17.991560  189998 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 10:57:17.991612  189998 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:57:17.991654  189998 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:57:17.991820  189998 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:57:17.991933  189998 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:57:17.991986  189998 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 10:57:17.992063  189998 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:57:17.660323  208230 system_pods.go:86] 5 kube-system pods found
	I0114 10:57:17.660359  208230 system_pods.go:89] "coredns-5644d7b6d9-hgkp8" [778bdc20-2359-48c7-94c2-731feb129247] Running
	I0114 10:57:17.660368  208230 system_pods.go:89] "kindnet-4lz6c" [0ef65d02-742c-410e-9317-c7206b256e50] Running
	I0114 10:57:17.660375  208230 system_pods.go:89] "kube-proxy-w8srl" [feb1308b-9a8d-4abc-a4cd-666a2b60fcaf] Running
	I0114 10:57:17.660388  208230 system_pods.go:89] "metrics-server-7958775c-w9tnf" [9ea12726-b89b-4000-b9b8-2beeead64c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:57:17.660400  208230 system_pods.go:89] "storage-provisioner" [27d9671c-b5f5-4be4-979a-958773b8347e] Running
	I0114 10:57:17.660422  208230 retry.go:31] will retry after 7.804314206s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0114 10:57:17.993800  189998 out.go:204]   - Booting up control plane ...
	I0114 10:57:17.993901  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:57:17.993986  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:57:17.994069  189998 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:57:17.994175  189998 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:57:17.994339  189998 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 10:57:17.994382  189998 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 10:57:17.994456  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.994616  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.994695  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.994913  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995013  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.995168  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995250  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.995435  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995525  189998 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 10:57:17.995858  189998 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 10:57:17.995875  189998 kubeadm.go:317] 
	I0114 10:57:17.995932  189998 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 10:57:17.995992  189998 kubeadm.go:317] 	timed out waiting for the condition
	I0114 10:57:17.995998  189998 kubeadm.go:317] 
	I0114 10:57:17.996035  189998 kubeadm.go:317] This error is likely caused by:
	I0114 10:57:17.996097  189998 kubeadm.go:317] 	- The kubelet is not running
	I0114 10:57:17.996256  189998 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 10:57:17.996280  189998 kubeadm.go:317] 
	I0114 10:57:17.996416  189998 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 10:57:17.996462  189998 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 10:57:17.996502  189998 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 10:57:17.996517  189998 kubeadm.go:317] 
	I0114 10:57:17.996651  189998 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 10:57:17.996744  189998 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0114 10:57:17.996865  189998 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0114 10:57:17.996975  189998 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0114 10:57:17.997057  189998 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 10:57:17.997184  189998 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0114 10:57:17.997221  189998 kubeadm.go:398] StartCluster complete in 8m6.064712039s
	I0114 10:57:17.997262  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0114 10:57:17.997320  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0114 10:57:18.023728  189998 cri.go:87] found id: ""
	I0114 10:57:18.023751  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.023758  189998 logs.go:276] No container was found matching "kube-apiserver"
	I0114 10:57:18.023764  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0114 10:57:18.023819  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0114 10:57:18.049043  189998 cri.go:87] found id: ""
	I0114 10:57:18.049067  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.049085  189998 logs.go:276] No container was found matching "etcd"
	I0114 10:57:18.049092  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0114 10:57:18.049153  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0114 10:57:18.076100  189998 cri.go:87] found id: ""
	I0114 10:57:18.076133  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.076140  189998 logs.go:276] No container was found matching "coredns"
	I0114 10:57:18.076154  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0114 10:57:18.076224  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0114 10:57:18.100358  189998 cri.go:87] found id: ""
	I0114 10:57:18.100378  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.100384  189998 logs.go:276] No container was found matching "kube-scheduler"
	I0114 10:57:18.100389  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0114 10:57:18.100428  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0114 10:57:18.123283  189998 cri.go:87] found id: ""
	I0114 10:57:18.123310  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.123318  189998 logs.go:276] No container was found matching "kube-proxy"
	I0114 10:57:18.123325  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0114 10:57:18.123386  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0114 10:57:18.146331  189998 cri.go:87] found id: ""
	I0114 10:57:18.146357  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.146364  189998 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 10:57:18.146372  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0114 10:57:18.146420  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0114 10:57:18.170499  189998 cri.go:87] found id: ""
	I0114 10:57:18.170523  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.170538  189998 logs.go:276] No container was found matching "storage-provisioner"
	I0114 10:57:18.170546  189998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0114 10:57:18.170597  189998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0114 10:57:18.195071  189998 cri.go:87] found id: ""
	I0114 10:57:18.195093  189998 logs.go:274] 0 containers: []
	W0114 10:57:18.195100  189998 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 10:57:18.195111  189998 logs.go:123] Gathering logs for kubelet ...
	I0114 10:57:18.195125  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0114 10:57:18.212387  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12529]: E0114 10:56:28.256460   12529 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.212758  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12539]: E0114 10:56:28.983861   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.213115  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:29 kubernetes-upgrade-104742 kubelet[12551]: E0114 10:56:29.737789   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.213474  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:30 kubernetes-upgrade-104742 kubelet[12562]: E0114 10:56:30.501487   12562 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.213819  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:31 kubernetes-upgrade-104742 kubelet[12573]: E0114 10:56:31.281858   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.214177  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:31 kubernetes-upgrade-104742 kubelet[12584]: E0114 10:56:31.994205   12584 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.214529  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:32 kubernetes-upgrade-104742 kubelet[12595]: E0114 10:56:32.747308   12595 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.214873  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:33 kubernetes-upgrade-104742 kubelet[12606]: E0114 10:56:33.490167   12606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.215223  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:34 kubernetes-upgrade-104742 kubelet[12617]: E0114 10:56:34.235997   12617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.215582  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:34 kubernetes-upgrade-104742 kubelet[12628]: E0114 10:56:34.985443   12628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.215980  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:35 kubernetes-upgrade-104742 kubelet[12639]: E0114 10:56:35.745594   12639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.216331  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:36 kubernetes-upgrade-104742 kubelet[12650]: E0114 10:56:36.498581   12650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.216676  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:37 kubernetes-upgrade-104742 kubelet[12662]: E0114 10:56:37.249571   12662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.217031  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:38 kubernetes-upgrade-104742 kubelet[12672]: E0114 10:56:38.008471   12672 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.217382  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:38 kubernetes-upgrade-104742 kubelet[12682]: E0114 10:56:38.763526   12682 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.217737  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:39 kubernetes-upgrade-104742 kubelet[12693]: E0114 10:56:39.492292   12693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.218085  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:40 kubernetes-upgrade-104742 kubelet[12704]: E0114 10:56:40.251896   12704 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.218440  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:40 kubernetes-upgrade-104742 kubelet[12714]: E0114 10:56:40.991722   12714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.218787  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:41 kubernetes-upgrade-104742 kubelet[12725]: E0114 10:56:41.762343   12725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.219140  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:42 kubernetes-upgrade-104742 kubelet[12734]: E0114 10:56:42.537442   12734 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.219496  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:43 kubernetes-upgrade-104742 kubelet[12743]: E0114 10:56:43.263426   12743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.219862  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:44 kubernetes-upgrade-104742 kubelet[12752]: E0114 10:56:44.028905   12752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.220226  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:44 kubernetes-upgrade-104742 kubelet[12763]: E0114 10:56:44.790619   12763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.220579  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:45 kubernetes-upgrade-104742 kubelet[12773]: E0114 10:56:45.497745   12773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.220922  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:46 kubernetes-upgrade-104742 kubelet[12784]: E0114 10:56:46.240817   12784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.221280  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:46 kubernetes-upgrade-104742 kubelet[12794]: E0114 10:56:46.985491   12794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.221634  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:47 kubernetes-upgrade-104742 kubelet[12806]: E0114 10:56:47.733518   12806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.221977  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:48 kubernetes-upgrade-104742 kubelet[12817]: E0114 10:56:48.494874   12817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.222336  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:49 kubernetes-upgrade-104742 kubelet[12829]: E0114 10:56:49.285800   12829 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.222679  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:49 kubernetes-upgrade-104742 kubelet[12839]: E0114 10:56:49.991740   12839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.223023  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:50 kubernetes-upgrade-104742 kubelet[12850]: E0114 10:56:50.744969   12850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.223376  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:51 kubernetes-upgrade-104742 kubelet[12862]: E0114 10:56:51.507538   12862 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.223755  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:52 kubernetes-upgrade-104742 kubelet[12872]: E0114 10:56:52.265215   12872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.224105  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:53 kubernetes-upgrade-104742 kubelet[12882]: E0114 10:56:53.002039   12882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.224459  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:53 kubernetes-upgrade-104742 kubelet[12892]: E0114 10:56:53.746428   12892 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.224811  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:54 kubernetes-upgrade-104742 kubelet[12901]: E0114 10:56:54.504240   12901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.225164  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:55 kubernetes-upgrade-104742 kubelet[12911]: E0114 10:56:55.249839   12911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.225510  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:56 kubernetes-upgrade-104742 kubelet[12922]: E0114 10:56:56.003330   12922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.225873  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:56 kubernetes-upgrade-104742 kubelet[12932]: E0114 10:56:56.756370   12932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.226221  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:57 kubernetes-upgrade-104742 kubelet[12941]: E0114 10:56:57.501513   12941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.226573  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:58 kubernetes-upgrade-104742 kubelet[12952]: E0114 10:56:58.276122   12952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.226920  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:58 kubernetes-upgrade-104742 kubelet[12962]: E0114 10:56:58.994819   12962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.227277  189998 logs.go:138] Found kubelet problem: Jan 14 10:56:59 kubernetes-upgrade-104742 kubelet[12972]: E0114 10:56:59.747758   12972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.227620  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:00 kubernetes-upgrade-104742 kubelet[12981]: E0114 10:57:00.491874   12981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.227982  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:01 kubernetes-upgrade-104742 kubelet[12992]: E0114 10:57:01.238429   12992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.228331  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:01 kubernetes-upgrade-104742 kubelet[13002]: E0114 10:57:01.990521   13002 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.228676  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:02 kubernetes-upgrade-104742 kubelet[13013]: E0114 10:57:02.737359   13013 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.229021  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:03 kubernetes-upgrade-104742 kubelet[13025]: E0114 10:57:03.484814   13025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.229376  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:04 kubernetes-upgrade-104742 kubelet[13035]: E0114 10:57:04.235719   13035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.229730  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:05 kubernetes-upgrade-104742 kubelet[13046]: E0114 10:57:05.005556   13046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.230076  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:05 kubernetes-upgrade-104742 kubelet[13056]: E0114 10:57:05.773972   13056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.230424  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:06 kubernetes-upgrade-104742 kubelet[13067]: E0114 10:57:06.495256   13067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.230773  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:07 kubernetes-upgrade-104742 kubelet[13079]: E0114 10:57:07.240391   13079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.231116  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:07 kubernetes-upgrade-104742 kubelet[13090]: E0114 10:57:07.993801   13090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.231470  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:08 kubernetes-upgrade-104742 kubelet[13100]: E0114 10:57:08.738803   13100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.231901  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:09 kubernetes-upgrade-104742 kubelet[13111]: E0114 10:57:09.491099   13111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.232262  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:10 kubernetes-upgrade-104742 kubelet[13122]: E0114 10:57:10.236116   13122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.232607  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:10 kubernetes-upgrade-104742 kubelet[13133]: E0114 10:57:10.993398   13133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.232951  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:11 kubernetes-upgrade-104742 kubelet[13144]: E0114 10:57:11.741062   13144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.233308  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:12 kubernetes-upgrade-104742 kubelet[13155]: E0114 10:57:12.497453   13155 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.233669  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:13 kubernetes-upgrade-104742 kubelet[13165]: E0114 10:57:13.239621   13165 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.234015  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:13 kubernetes-upgrade-104742 kubelet[13176]: E0114 10:57:13.994697   13176 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.234366  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:14 kubernetes-upgrade-104742 kubelet[13185]: E0114 10:57:14.740831   13185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.234743  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:15 kubernetes-upgrade-104742 kubelet[13196]: E0114 10:57:15.493497   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.235092  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:16 kubernetes-upgrade-104742 kubelet[13207]: E0114 10:57:16.244127   13207 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.235450  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:16 kubernetes-upgrade-104742 kubelet[13217]: E0114 10:57:16.987479   13217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0114 10:57:18.235819  189998 logs.go:138] Found kubelet problem: Jan 14 10:57:17 kubernetes-upgrade-104742 kubelet[13228]: E0114 10:57:17.738714   13228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.235936  189998 logs.go:123] Gathering logs for dmesg ...
	I0114 10:57:18.235952  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 10:57:18.253006  189998 logs.go:123] Gathering logs for describe nodes ...
	I0114 10:57:18.253035  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 10:57:18.307257  189998 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 10:57:18.307290  189998 logs.go:123] Gathering logs for containerd ...
	I0114 10:57:18.307303  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0114 10:57:18.364637  189998 logs.go:123] Gathering logs for container status ...
	I0114 10:57:18.364672  189998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0114 10:57:18.391231  189998 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 10:57:18.391270  189998 out.go:239] * 
	W0114 10:57:18.391443  189998 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:57:18.391466  189998 out.go:239] * 
	W0114 10:57:18.392419  189998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 10:57:18.395348  189998 out.go:177] X Problems detected in kubelet:
	I0114 10:57:18.396939  189998 out.go:177]   Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12529]: E0114 10:56:28.256460   12529 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.398446  189998 out.go:177]   Jan 14 10:56:28 kubernetes-upgrade-104742 kubelet[12539]: E0114 10:56:28.983861   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.399845  189998 out.go:177]   Jan 14 10:56:29 kubernetes-upgrade-104742 kubelet[12551]: E0114 10:56:29.737789   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0114 10:57:18.403546  189998 out.go:177] 
	W0114 10:57:18.405206  189998 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0114 10:55:22.100120   11447 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 10:57:18.405335  189998 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 10:57:18.405403  189998 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 10:57:18.407110  189998 out.go:177] 
	I0114 10:57:18.060019  247733 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503102 seconds
	I0114 10:57:18.060223  247733 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 10:57:18.069711  247733 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 10:57:18.588260  247733 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 10:57:18.588544  247733 kubeadm.go:317] [mark-control-plane] Marking the node newest-cni-105657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 10:57:19.096682  247733 kubeadm.go:317] [bootstrap-token] Using token: z61fmo.hmvecozynsfjwfcb
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2023-01-14 10:48:40 UTC, end at Sat 2023-01-14 10:57:20 UTC. --
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.845796970Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.863821087Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.863884832Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.881297749Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.881358933Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.898466110Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.898540369Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.914603044Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.914664375Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.931626197Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.931724881Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.948381264Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.948448110Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.965217901Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.965272711Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.981508609Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.981561532Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.998099767Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:21 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:21.998153688Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:22 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:22.014766961Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:22 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:22.014823845Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:22 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:22.030655354Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:22 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:22.030705699Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 14 10:55:22 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:22.046781206Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 14 10:55:22 kubernetes-upgrade-104742 containerd[497]: time="2023-01-14T10:55:22.046849986Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000001] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +4.191672] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000006] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000001] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000001] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +8.187423] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000006] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000002] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e047e4f644b2
	[  +0.000001] ll header: 00000000: 02 42 59 7a 57 03 02 42 c0 a8 55 02 08 00
	[  +8.682461] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-db16916179cc
	[  +0.000006] ll header: 00000000: 02 42 7d c1 7d 38 02 42 c0 a8 5e 02 08 00
	[  +1.012858] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-db16916179cc
	[  +0.000007] ll header: 00000000: 02 42 7d c1 7d 38 02 42 c0 a8 5e 02 08 00
	[  +2.015848] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-db16916179cc
	[  +0.000006] ll header: 00000000: 02 42 7d c1 7d 38 02 42 c0 a8 5e 02 08 00
	[  +4.163687] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-db16916179cc
	[  +0.000006] ll header: 00000000: 02 42 7d c1 7d 38 02 42 c0 a8 5e 02 08 00
	[  +8.187458] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-db16916179cc
	[  +0.000007] ll header: 00000000: 02 42 7d c1 7d 38 02 42 c0 a8 5e 02 08 00
	
	* 
	* ==> kernel <==
	*  10:57:20 up  1:39,  0 users,  load average: 5.35, 2.99, 2.05
	Linux kubernetes-upgrade-104742 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:48:40 UTC, end at Sat 2023-01-14 10:57:20 UTC. --
	Jan 14 10:57:16 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 10:57:17 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Jan 14 10:57:17 kubernetes-upgrade-104742 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:17 kubernetes-upgrade-104742 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:17 kubernetes-upgrade-104742 kubelet[13228]: E0114 10:57:17.738714   13228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 14 10:57:17 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 14 10:57:17 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 10:57:18 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Jan 14 10:57:18 kubernetes-upgrade-104742 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:18 kubernetes-upgrade-104742 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:18 kubernetes-upgrade-104742 kubelet[13379]: E0114 10:57:18.528819   13379 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 14 10:57:18 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 14 10:57:18 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:19 kubernetes-upgrade-104742 kubelet[13398]: E0114 10:57:19.233628   13398 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:19 kubernetes-upgrade-104742 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 10:57:20 kubernetes-upgrade-104742 kubelet[13492]: E0114 10:57:20.014102   13492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 14 10:57:20 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 14 10:57:20 kubernetes-upgrade-104742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0114 10:57:20.161125  251129 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-104742 -n kubernetes-upgrade-104742
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-104742 -n kubernetes-upgrade-104742: exit status 2 (424.776805ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-104742" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-104742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-104742
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-104742: (2.1672236s)
--- FAIL: TestKubernetesUpgrade (580.42s)
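The crash loop above points at a single cause: kubelet v1.25.3 rejects --cni-conf-dir, one of the dockershim-era networking flags removed from kubelet in Kubernetes 1.24, so a flag carried over from the pre-upgrade configuration keeps the unit from ever starting (systemd's restart counter reaches 156 before the run ends). A minimal triage sketch against the node container, assuming the stale flag sits in /var/lib/kubelet/kubeadm-flags.env (the file kubeadm reports writing during [kubelet-start] above); the sed expression is illustrative, not part of this run:

	# Show the flags kubelet is actually started with.
	docker exec kubernetes-upgrade-104742 cat /var/lib/kubelet/kubeadm-flags.env
	# If KUBELET_KUBEADM_ARGS still carries --cni-conf-dir, drop it and restart the
	# unit; since Kubernetes 1.24 the CNI config directory is configured on the
	# container runtime (containerd here) rather than on kubelet.
	docker exec kubernetes-upgrade-104742 sed -i 's/ --cni-conf-dir=[^" ]*//g' /var/lib/kubelet/kubeadm-flags.env
	docker exec kubernetes-upgrade-104742 systemctl restart kubelet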

x
+
TestNetworkPlugins/group/calico/Start (516.2s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-104616 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E0114 10:58:30.855735   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:58:33.934430   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-104616 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m36.182396115s)

-- stdout --
	* [calico-104616] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-104616 in cluster calico-104616
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0114 10:58:26.970433  269956 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:58:26.971028  269956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:58:26.971044  269956 out.go:309] Setting ErrFile to fd 2...
	I0114 10:58:26.971052  269956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:58:26.972260  269956 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:58:26.973277  269956 out.go:303] Setting JSON to false
	I0114 10:58:26.975409  269956 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6054,"bootTime":1673687853,"procs":1126,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:58:26.975484  269956 start.go:135] virtualization: kvm guest
	I0114 10:58:26.978078  269956 out.go:177] * [calico-104616] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:58:26.980010  269956 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:58:26.979945  269956 notify.go:220] Checking for updates...
	I0114 10:58:26.982743  269956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:58:26.984188  269956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:58:26.985535  269956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:58:26.986934  269956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:58:26.988841  269956 config.go:180] Loaded profile config "cilium-104616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:58:26.988957  269956 config.go:180] Loaded profile config "default-k8s-diff-port-105641": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:58:26.989065  269956 config.go:180] Loaded profile config "kindnet-104615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:58:26.989131  269956 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:58:27.018828  269956 docker.go:138] docker version: linux-20.10.22
	I0114 10:58:27.018962  269956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:58:27.154298  269956 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-14 10:58:27.045104954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:58:27.154434  269956 docker.go:255] overlay module found
	I0114 10:58:27.157075  269956 out.go:177] * Using the docker driver based on user configuration
	I0114 10:58:27.158593  269956 start.go:294] selected driver: docker
	I0114 10:58:27.158606  269956 start.go:838] validating driver "docker" against <nil>
	I0114 10:58:27.158628  269956 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:58:27.159834  269956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:58:27.286310  269956 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-14 10:58:27.186536161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:58:27.286448  269956 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 10:58:27.286736  269956 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:58:27.288988  269956 out.go:177] * Using Docker driver with root privileges
	I0114 10:58:27.290513  269956 cni.go:95] Creating CNI manager for "calico"
	I0114 10:58:27.290551  269956 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0114 10:58:27.290563  269956 start_flags.go:319] config:
	{Name:calico-104616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-104616 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:58:27.292572  269956 out.go:177] * Starting control plane node calico-104616 in cluster calico-104616
	I0114 10:58:27.293850  269956 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:58:27.295276  269956 out.go:177] * Pulling base image ...
	I0114 10:58:27.296759  269956 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:58:27.296814  269956 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:58:27.296842  269956 cache.go:57] Caching tarball of preloaded images
	I0114 10:58:27.297012  269956 preload.go:174] Found /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 10:58:27.297037  269956 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0114 10:58:27.297173  269956 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/config.json ...
	I0114 10:58:27.297206  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/config.json: {Name:mk6af617d9f79d45a7a2b9a6cd8ec96525bbdf23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:27.297355  269956 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:58:27.327166  269956 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 10:58:27.327194  269956 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 10:58:27.327217  269956 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:58:27.327256  269956 start.go:364] acquiring machines lock for calico-104616: {Name:mk3209983eccab2d59de2b21d094933ee0a669fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:58:27.327383  269956 start.go:368] acquired machines lock for "calico-104616" in 107.759µs
	I0114 10:58:27.327405  269956 start.go:93] Provisioning new machine with config: &{Name:calico-104616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-104616 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:58:27.327513  269956 start.go:125] createHost starting for "" (driver="docker")
	I0114 10:58:27.329691  269956 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0114 10:58:27.329994  269956 start.go:159] libmachine.API.Create for "calico-104616" (driver="docker")
	I0114 10:58:27.330027  269956 client.go:168] LocalClient.Create starting
	I0114 10:58:27.330136  269956 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem
	I0114 10:58:27.330173  269956 main.go:134] libmachine: Decoding PEM data...
	I0114 10:58:27.330192  269956 main.go:134] libmachine: Parsing certificate...
	I0114 10:58:27.330263  269956 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem
	I0114 10:58:27.330282  269956 main.go:134] libmachine: Decoding PEM data...
	I0114 10:58:27.330295  269956 main.go:134] libmachine: Parsing certificate...
	I0114 10:58:27.330718  269956 cli_runner.go:164] Run: docker network inspect calico-104616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 10:58:27.376743  269956 cli_runner.go:211] docker network inspect calico-104616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 10:58:27.376842  269956 network_create.go:280] running [docker network inspect calico-104616] to gather additional debugging logs...
	I0114 10:58:27.376862  269956 cli_runner.go:164] Run: docker network inspect calico-104616
	W0114 10:58:27.408761  269956 cli_runner.go:211] docker network inspect calico-104616 returned with exit code 1
	I0114 10:58:27.408802  269956 network_create.go:283] error running [docker network inspect calico-104616]: docker network inspect calico-104616: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-104616
	I0114 10:58:27.408815  269956 network_create.go:285] output of [docker network inspect calico-104616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-104616
	
	** /stderr **
	I0114 10:58:27.408857  269956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:58:27.451550  269956 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fb325bb87cdc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ae:d7:be:f3}}
	I0114 10:58:27.453128  269956 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8d666bf786b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:59:8a:0a:b1}}
	I0114 10:58:27.455197  269956 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000a9cdc8] misses:0}
	I0114 10:58:27.455282  269956 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 10:58:27.455315  269956 network_create.go:123] attempt to create docker network calico-104616 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 10:58:27.455412  269956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-104616 calico-104616
	I0114 10:58:27.558567  269956 network_create.go:107] docker network calico-104616 192.168.67.0/24 created
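The two subnets skipped just above belong to bridge networks left over from earlier test profiles; minikube walks candidate private /24s (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) until one is unclaimed, then creates the bridge with a fixed gateway. The same inventory the subnet picker sees can be pulled by hand:

	docker network ls --filter driver=bridge -q \
	  | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'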
	I0114 10:58:27.558610  269956 kic.go:117] calculated static IP "192.168.67.2" for the "calico-104616" container
	I0114 10:58:27.558682  269956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 10:58:27.606393  269956 cli_runner.go:164] Run: docker volume create calico-104616 --label name.minikube.sigs.k8s.io=calico-104616 --label created_by.minikube.sigs.k8s.io=true
	I0114 10:58:27.645706  269956 oci.go:103] Successfully created a docker volume calico-104616
	I0114 10:58:27.645789  269956 cli_runner.go:164] Run: docker run --rm --name calico-104616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-104616 --entrypoint /usr/bin/test -v calico-104616:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 10:58:28.382067  269956 oci.go:107] Successfully prepared a docker volume calico-104616
	I0114 10:58:28.382121  269956 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:58:28.382146  269956 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 10:58:28.382219  269956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-104616:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 10:58:34.930184  269956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-104616:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.547888679s)
	I0114 10:58:34.930223  269956 kic.go:199] duration metric: took 6.548073 seconds to extract preloaded images to volume
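The ~6.5-second step above populates the machine volume's /var from the preload tarball before the node container even exists, so containerd starts with every required image already on disk. This is the logged command, with the two long arguments pulled out into variables purely for readability:

	PRELOAD=/home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c'
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v calico-104616:/extractDir \
	  "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir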
	W0114 10:58:34.930381  269956 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0114 10:58:34.930505  269956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 10:58:35.105659  269956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-104616 --name calico-104616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-104616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-104616 --network calico-104616 --ip 192.168.67.2 --volume calico-104616:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 10:58:35.768522  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Running}}
	I0114 10:58:35.803881  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Status}}
	I0114 10:58:35.849269  269956 cli_runner.go:164] Run: docker exec calico-104616 stat /var/lib/dpkg/alternatives/iptables
	I0114 10:58:35.964764  269956 oci.go:144] the created container "calico-104616" has a running status.
	I0114 10:58:35.964790  269956 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa...
	I0114 10:58:36.042985  269956 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 10:58:36.138381  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Status}}
	I0114 10:58:36.189047  269956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 10:58:36.189081  269956 kic_runner.go:114] Args: [docker exec --privileged calico-104616 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 10:58:36.274364  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Status}}
	I0114 10:58:36.306601  269956 machine.go:88] provisioning docker machine ...
	I0114 10:58:36.306640  269956 ubuntu.go:169] provisioning hostname "calico-104616"
	I0114 10:58:36.306722  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:36.337419  269956 main.go:134] libmachine: Using SSH client type: native
	I0114 10:58:36.337664  269956 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0114 10:58:36.337686  269956 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-104616 && echo "calico-104616" | sudo tee /etc/hostname
	I0114 10:58:36.466750  269956 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-104616
	
	I0114 10:58:36.466829  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:36.493570  269956 main.go:134] libmachine: Using SSH client type: native
	I0114 10:58:36.493715  269956 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0114 10:58:36.493740  269956 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-104616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-104616/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-104616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 10:58:36.611376  269956 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:58:36.611404  269956 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-3818/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-3818/.minikube}
	I0114 10:58:36.611421  269956 ubuntu.go:177] setting up certificates
	I0114 10:58:36.611428  269956 provision.go:83] configureAuth start
	I0114 10:58:36.611492  269956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-104616
	I0114 10:58:36.635789  269956 provision.go:138] copyHostCerts
	I0114 10:58:36.635840  269956 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem, removing ...
	I0114 10:58:36.635847  269956 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem
	I0114 10:58:36.635912  269956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/ca.pem (1078 bytes)
	I0114 10:58:36.635988  269956 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem, removing ...
	I0114 10:58:36.635999  269956 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem
	I0114 10:58:36.636023  269956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/cert.pem (1123 bytes)
	I0114 10:58:36.636093  269956 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem, removing ...
	I0114 10:58:36.636101  269956 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem
	I0114 10:58:36.636124  269956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-3818/.minikube/key.pem (1675 bytes)
	I0114 10:58:36.636178  269956 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem org=jenkins.calico-104616 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-104616]
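minikube generates this server certificate in-process (provision.go); the essential part is the SAN list, which must cover the node IP and every name the tooling later dials. A rough openssl equivalent, assuming the CA pair from .minikube/certs is in the working directory (a sketch, not what minikube literally runs):

	openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.calico-104616' \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:calico-104616')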
	I0114 10:58:36.786148  269956 provision.go:172] copyRemoteCerts
	I0114 10:58:36.786211  269956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:58:36.786255  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:36.812857  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
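Every ssh_runner and scp step from here on rides on this one connection: the container's sshd is published on an ephemeral loopback port (33047 in this run). The same session can be opened by hand; the port should be read back from Docker rather than hard-coded, since it changes per run:

	PORT=$(docker port calico-104616 22/tcp | head -n1 | cut -d: -f2)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	  -i /home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa \
	  docker@127.0.0.1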
	I0114 10:58:36.901608  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:58:36.923320  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0114 10:58:36.941307  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:58:36.968577  269956 provision.go:86] duration metric: configureAuth took 357.136267ms
	I0114 10:58:36.968605  269956 ubuntu.go:193] setting minikube options for container-runtime
	I0114 10:58:36.968811  269956 config.go:180] Loaded profile config "calico-104616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:58:36.968827  269956 machine.go:91] provisioned docker machine in 662.201404ms
	I0114 10:58:36.968835  269956 client.go:171] LocalClient.Create took 9.638803128s
	I0114 10:58:36.968851  269956 start.go:167] duration metric: libmachine.API.Create for "calico-104616" took 9.638857708s
	I0114 10:58:36.968860  269956 start.go:300] post-start starting for "calico-104616" (driver="docker")
	I0114 10:58:36.968867  269956 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:58:36.968935  269956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:58:36.968973  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:36.998880  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
	I0114 10:58:37.082824  269956 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:58:37.085696  269956 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:58:37.085716  269956 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:58:37.085726  269956 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:58:37.085740  269956 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 10:58:37.085751  269956 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/addons for local assets ...
	I0114 10:58:37.085799  269956 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3818/.minikube/files for local assets ...
	I0114 10:58:37.085864  269956 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem -> 103062.pem in /etc/ssl/certs
	I0114 10:58:37.085936  269956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:58:37.092517  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:58:37.109794  269956 start.go:303] post-start completed in 140.922236ms
	I0114 10:58:37.110128  269956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-104616
	I0114 10:58:37.141254  269956 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/config.json ...
	I0114 10:58:37.141582  269956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:58:37.141640  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:37.173668  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
	I0114 10:58:37.262174  269956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 10:58:37.266621  269956 start.go:128] duration metric: createHost completed in 9.939095159s
	I0114 10:58:37.266643  269956 start.go:83] releasing machines lock for "calico-104616", held for 9.939251511s
	I0114 10:58:37.266744  269956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-104616
	I0114 10:58:37.292317  269956 ssh_runner.go:195] Run: cat /version.json
	I0114 10:58:37.292371  269956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 10:58:37.292377  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:37.292425  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:58:37.318244  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
	I0114 10:58:37.330906  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
	I0114 10:58:37.410733  269956 ssh_runner.go:195] Run: systemctl --version
	I0114 10:58:37.444101  269956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:58:37.456060  269956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:58:37.466533  269956 docker.go:189] disabling docker service ...
	I0114 10:58:37.466597  269956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:58:37.483160  269956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:58:37.492626  269956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:58:37.580662  269956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:58:37.667760  269956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:58:37.677509  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:58:37.691083  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0114 10:58:37.699998  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:58:37.708089  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:58:37.716295  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
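These four in-place sed edits pin the sandbox (pause) image to registry.k8s.io/pause:3.8, disable OOM-score clamping, keep containerd on cgroupfs (matching the kubelet's cgroupDriver in the config rendered below), and point the CRI CNI plugin at /etc/cni/net.d. To confirm all four landed:

	sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml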
	I0114 10:58:37.724475  269956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:58:37.730917  269956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:58:37.737218  269956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:58:37.817558  269956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:58:37.907336  269956 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:58:37.907412  269956 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:58:37.910914  269956 start.go:472] Will wait 60s for crictl version
	I0114 10:58:37.911006  269956 ssh_runner.go:195] Run: which crictl
	I0114 10:58:37.914464  269956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:58:37.949886  269956 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0114 10:58:37.949951  269956 ssh_runner.go:195] Run: containerd --version
	I0114 10:58:37.978026  269956 ssh_runner.go:195] Run: containerd --version
	I0114 10:58:38.009327  269956 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0114 10:58:38.010680  269956 cli_runner.go:164] Run: docker network inspect calico-104616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 10:58:38.040953  269956 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0114 10:58:38.048349  269956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:58:38.065059  269956 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:58:38.065130  269956 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:58:38.092077  269956 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:58:38.092106  269956 containerd.go:467] Images already preloaded, skipping extraction
	I0114 10:58:38.092156  269956 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:58:38.115289  269956 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:58:38.115309  269956 cache_images.go:84] Images are preloaded, skipping loading
	I0114 10:58:38.115354  269956 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:58:38.142711  269956 cni.go:95] Creating CNI manager for "calico"
	I0114 10:58:38.142742  269956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:58:38.142766  269956 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-104616 NodeName:calico-104616 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:58:38.142938  269956 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-104616"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:58:38.143063  269956 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-104616 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-104616 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
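The unit fragment above becomes the 10-kubeadm.conf drop-in scp'd two steps below; the empty ExecStart= line is deliberate, clearing the stock command before redefining it. The merged unit can be inspected on the node with:

	systemctl cat kubelet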
	I0114 10:58:38.143130  269956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:58:38.150854  269956 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:58:38.150925  269956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:58:38.158171  269956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0114 10:58:38.171548  269956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:58:38.185014  269956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes)
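The 2042-byte file is the kubeadm config rendered above, staged as kubeadm.yaml.new and promoted to kubeadm.yaml just before init (the cp at 10:58:38.948446 below). Once the cluster is up, the KubeProxyConfiguration portion can be read back from the cluster itself:

	kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}'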
	I0114 10:58:38.199505  269956 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 10:58:38.202407  269956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 10:58:38.211038  269956 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616 for IP: 192.168.67.2
	I0114 10:58:38.211125  269956 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key
	I0114 10:58:38.211174  269956 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key
	I0114 10:58:38.211215  269956 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/client.key
	I0114 10:58:38.211235  269956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/client.crt with IP's: []
	I0114 10:58:38.324048  269956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/client.crt ...
	I0114 10:58:38.324072  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/client.crt: {Name:mk1938aedcedeed85944cf8b71c1e7b36416bcf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:38.324274  269956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/client.key ...
	I0114 10:58:38.324293  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/client.key: {Name:mk48265db88cb992e3c0f5727d0ec10d2c1170c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:38.324429  269956 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.key.c7fa3a9e
	I0114 10:58:38.324450  269956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 10:58:38.551345  269956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.crt.c7fa3a9e ...
	I0114 10:58:38.551378  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.crt.c7fa3a9e: {Name:mkcd45b72270702f09aad73a190b570f09fc9566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:38.551608  269956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.key.c7fa3a9e ...
	I0114 10:58:38.551627  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.key.c7fa3a9e: {Name:mkf815177eb57bf94b381cc85b26b049b9135cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:38.551790  269956 certs.go:320] copying /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.crt
	I0114 10:58:38.551875  269956 certs.go:324] copying /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.key
	I0114 10:58:38.551953  269956 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.key
	I0114 10:58:38.551975  269956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.crt with IP's: []
	I0114 10:58:38.625049  269956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.crt ...
	I0114 10:58:38.625074  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.crt: {Name:mkbff43c9253ee9de8f3e95be7d391998fce0e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:38.625285  269956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.key ...
	I0114 10:58:38.625299  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.key: {Name:mka781c9d6df2bd5d0232edac75cc3fe7169e967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:58:38.625459  269956 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem (1338 bytes)
	W0114 10:58:38.625504  269956 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306_empty.pem, impossibly tiny 0 bytes
	I0114 10:58:38.625519  269956 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:58:38.625549  269956 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:58:38.625579  269956 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:58:38.625601  269956 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/certs/home/jenkins/minikube-integration/15642-3818/.minikube/certs/key.pem (1675 bytes)
	I0114 10:58:38.625664  269956 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem (1708 bytes)
	I0114 10:58:38.626263  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:58:38.645227  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 10:58:38.664117  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:58:38.681644  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/calico-104616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:58:38.699724  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:58:38.716618  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 10:58:38.735469  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:58:38.755524  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 10:58:38.773803  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/ssl/certs/103062.pem --> /usr/share/ca-certificates/103062.pem (1708 bytes)
	I0114 10:58:38.790301  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:58:38.807313  269956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-3818/.minikube/certs/10306.pem --> /usr/share/ca-certificates/10306.pem (1338 bytes)
	I0114 10:58:38.824209  269956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:58:38.837377  269956 ssh_runner.go:195] Run: openssl version
	I0114 10:58:38.842632  269956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:58:38.851417  269956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:58:38.854768  269956 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:58:38.854826  269956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:58:38.860885  269956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:58:38.868544  269956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10306.pem && ln -fs /usr/share/ca-certificates/10306.pem /etc/ssl/certs/10306.pem"
	I0114 10:58:38.877355  269956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10306.pem
	I0114 10:58:38.880851  269956 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/10306.pem
	I0114 10:58:38.880932  269956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10306.pem
	I0114 10:58:38.886284  269956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10306.pem /etc/ssl/certs/51391683.0"
	I0114 10:58:38.893854  269956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103062.pem && ln -fs /usr/share/ca-certificates/103062.pem /etc/ssl/certs/103062.pem"
	I0114 10:58:38.901517  269956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103062.pem
	I0114 10:58:38.904651  269956 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/103062.pem
	I0114 10:58:38.904720  269956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103062.pem
	I0114 10:58:38.909631  269956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103062.pem /etc/ssl/certs/3ec20f2e.0"
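The 8-hex-digit link names here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is exactly what the `openssl x509 -hash -noout` calls above compute; TLS libraries use them to look CAs up in /etc/ssl/certs. Recreating one link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"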
	I0114 10:58:38.916792  269956 kubeadm.go:396] StartCluster: {Name:calico-104616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-104616 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:58:38.916896  269956 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:58:38.916930  269956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:58:38.941469  269956 cri.go:87] found id: ""
	I0114 10:58:38.941539  269956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:58:38.948446  269956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:58:38.956062  269956 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 10:58:38.956115  269956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:58:38.962903  269956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:58:38.962937  269956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 10:58:39.004971  269956 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 10:58:39.005098  269956 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:58:39.034273  269956 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0114 10:58:39.034361  269956 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
	I0114 10:58:39.034426  269956 kubeadm.go:317] OS: Linux
	I0114 10:58:39.034519  269956 kubeadm.go:317] CGROUPS_CPU: enabled
	I0114 10:58:39.034617  269956 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0114 10:58:39.034692  269956 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0114 10:58:39.034741  269956 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0114 10:58:39.034786  269956 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0114 10:58:39.034829  269956 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0114 10:58:39.034900  269956 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0114 10:58:39.034978  269956 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0114 10:58:39.035022  269956 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0114 10:58:39.103902  269956 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 10:58:39.104023  269956 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 10:58:39.104153  269956 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 10:58:39.226049  269956 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:58:39.228787  269956 out.go:204]   - Generating certificates and keys ...
	I0114 10:58:39.228928  269956 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 10:58:39.229007  269956 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 10:58:39.405830  269956 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 10:58:39.598627  269956 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 10:58:39.759841  269956 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 10:58:39.856844  269956 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 10:58:39.945139  269956 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 10:58:39.945336  269956 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-104616 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 10:58:40.131603  269956 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 10:58:40.131802  269956 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-104616 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 10:58:40.287619  269956 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 10:58:40.500892  269956 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 10:58:40.793734  269956 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 10:58:40.793893  269956 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:58:40.930963  269956 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 10:58:41.068637  269956 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 10:58:41.161012  269956 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:58:41.559795  269956 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:58:41.571259  269956 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:58:41.572323  269956 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:58:41.572394  269956 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 10:58:41.660455  269956 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:58:41.662346  269956 out.go:204]   - Booting up control plane ...
	I0114 10:58:41.662476  269956 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:58:41.665261  269956 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:58:41.666540  269956 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:58:41.668383  269956 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:58:41.670684  269956 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 10:58:47.674230  269956 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.003511 seconds
	I0114 10:58:47.674479  269956 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 10:58:47.685019  269956 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 10:58:48.203053  269956 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 10:58:48.203225  269956 kubeadm.go:317] [mark-control-plane] Marking the node calico-104616 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 10:58:48.711346  269956 kubeadm.go:317] [bootstrap-token] Using token: t6017t.dvlu786s64wgutb7
	I0114 10:58:48.713436  269956 out.go:204]   - Configuring RBAC rules ...
	I0114 10:58:48.713593  269956 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0114 10:58:48.716347  269956 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0114 10:58:48.722137  269956 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0114 10:58:48.724836  269956 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0114 10:58:48.727240  269956 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0114 10:58:48.730225  269956 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0114 10:58:48.743589  269956 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0114 10:58:49.020507  269956 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0114 10:58:49.124069  269956 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0114 10:58:49.125966  269956 kubeadm.go:317] 
	I0114 10:58:49.126067  269956 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0114 10:58:49.126078  269956 kubeadm.go:317] 
	I0114 10:58:49.126203  269956 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0114 10:58:49.126211  269956 kubeadm.go:317] 
	I0114 10:58:49.126239  269956 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0114 10:58:49.126514  269956 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0114 10:58:49.126600  269956 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0114 10:58:49.126614  269956 kubeadm.go:317] 
	I0114 10:58:49.126690  269956 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0114 10:58:49.126704  269956 kubeadm.go:317] 
	I0114 10:58:49.126767  269956 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0114 10:58:49.126779  269956 kubeadm.go:317] 
	I0114 10:58:49.126831  269956 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0114 10:58:49.126918  269956 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0114 10:58:49.126991  269956 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0114 10:58:49.127002  269956 kubeadm.go:317] 
	I0114 10:58:49.127090  269956 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0114 10:58:49.127175  269956 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0114 10:58:49.127179  269956 kubeadm.go:317] 
	I0114 10:58:49.127244  269956 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token t6017t.dvlu786s64wgutb7 \
	I0114 10:58:49.127326  269956 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 \
	I0114 10:58:49.127343  269956 kubeadm.go:317] 	--control-plane 
	I0114 10:58:49.127346  269956 kubeadm.go:317] 
	I0114 10:58:49.127412  269956 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0114 10:58:49.127415  269956 kubeadm.go:317] 
	I0114 10:58:49.127478  269956 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token t6017t.dvlu786s64wgutb7 \
	I0114 10:58:49.127558  269956 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:2190edb54b87c9662b48e8fc1a937a97192e8b9f176ebe3ae080126998017520 
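	# [editor's note, not captured output] The --discovery-token-ca-cert-hash printed
	# above can be re-derived on the control plane from the cluster CA (assuming the
	# default kubeadm PKI path), per the kubeadm join documentation:
	#   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	#     | openssl rsa -pubin -outform der 2>/dev/null \
	#     | openssl dgst -sha256 -hex | sed 's/^.* //'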
	I0114 10:58:49.131061  269956 kubeadm.go:317] W0114 10:58:38.997029     734 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0114 10:58:49.131242  269956 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0114 10:58:49.131343  269956 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:58:49.131366  269956 cni.go:95] Creating CNI manager for "calico"
	I0114 10:58:49.133400  269956 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0114 10:58:49.135029  269956 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 10:58:49.135052  269956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I0114 10:58:49.152604  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 10:58:50.605340  269956 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.452696477s)
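	# [editor's note, not captured output] A hedged way to follow the Calico rollout
	# that the manifest apply above kicks off (label selectors assume the stock Calico
	# manifest that minikube ships):
	#   kubectl -n kube-system get pods -l k8s-app=calico-node -w
	#   kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers -w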
	I0114 10:58:50.605395  269956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:58:50.605523  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:50.605528  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=calico-104616 minikube.k8s.io/updated_at=2023_01_14T10_58_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:50.624777  269956 ops.go:34] apiserver oom_adj: -16
	I0114 10:58:50.694199  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:51.293960  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:51.794066  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:52.293903  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:52.794250  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:53.294102  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:53.794312  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:54.293442  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:54.794054  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:55.294170  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:55.794098  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:56.293579  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:56.794098  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:57.294164  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:57.794356  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:58.293439  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:58.793575  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:59.294069  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:58:59.793451  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:59:00.294004  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:59:00.793538  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:59:01.294092  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:59:01.794260  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:59:02.293825  269956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:59:02.366218  269956 kubeadm.go:1067] duration metric: took 11.760755117s to wait for elevateKubeSystemPrivileges.
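	# [editor's note, not captured output] The repeated "get sa default" runs above are
	# minikube polling until the default ServiceAccount exists; a rough shell sketch of
	# the same wait loop (an assumption, not minikube's actual implementation):
	#   until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done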
	I0114 10:59:02.366258  269956 kubeadm.go:398] StartCluster complete in 23.449475072s
	I0114 10:59:02.366284  269956 settings.go:142] acquiring lock: {Name:mk1c1a895c03873155a8c7da5f3762b351f9952d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:59:02.366406  269956 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:59:02.368283  269956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/kubeconfig: {Name:mk71090b236533c6578a1b526f82422ab6969707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:59:02.884323  269956 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-104616" rescaled to 1
	I0114 10:59:02.884379  269956 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:59:02.886044  269956 out.go:177] * Verifying Kubernetes components...
	I0114 10:59:02.884425  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 10:59:02.884457  269956 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0114 10:59:02.884641  269956 config.go:180] Loaded profile config "calico-104616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:59:02.887288  269956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:59:02.887334  269956 addons.go:65] Setting default-storageclass=true in profile "calico-104616"
	I0114 10:59:02.887362  269956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-104616"
	I0114 10:59:02.887338  269956 addons.go:65] Setting storage-provisioner=true in profile "calico-104616"
	I0114 10:59:02.887457  269956 addons.go:227] Setting addon storage-provisioner=true in "calico-104616"
	W0114 10:59:02.887464  269956 addons.go:236] addon storage-provisioner should already be in state true
	I0114 10:59:02.887501  269956 host.go:66] Checking if "calico-104616" exists ...
	I0114 10:59:02.887762  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Status}}
	I0114 10:59:02.888007  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Status}}
	I0114 10:59:02.931811  269956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:59:02.932519  269956 addons.go:227] Setting addon default-storageclass=true in "calico-104616"
	W0114 10:59:02.933102  269956 addons.go:236] addon default-storageclass should already be in state true
	I0114 10:59:02.933104  269956 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:59:02.933121  269956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 10:59:02.933132  269956 host.go:66] Checking if "calico-104616" exists ...
	I0114 10:59:02.933192  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:59:02.933485  269956 cli_runner.go:164] Run: docker container inspect calico-104616 --format={{.State.Status}}
	I0114 10:59:02.968213  269956 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 10:59:02.968231  269956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 10:59:02.968276  269956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104616
	I0114 10:59:02.977105  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
	I0114 10:59:03.001218  269956 node_ready.go:35] waiting up to 5m0s for node "calico-104616" to be "Ready" ...
	I0114 10:59:03.001500  269956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0114 10:59:03.005637  269956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/calico-104616/id_rsa Username:docker}
	I0114 10:59:03.022152  269956 node_ready.go:49] node "calico-104616" has status "Ready":"True"
	I0114 10:59:03.022171  269956 node_ready.go:38] duration metric: took 20.926034ms waiting for node "calico-104616" to be "Ready" ...
	I0114 10:59:03.022179  269956 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:59:03.031351  269956 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace to be "Ready" ...
	I0114 10:59:03.139303  269956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:59:03.241198  269956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 10:59:04.820646  269956 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.819113424s)
	I0114 10:59:04.820696  269956 start.go:833] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
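	# [editor's note, not captured output] Reconstructed from the sed pipeline above,
	# the block injected ahead of "forward . /etc/resolv.conf" in the CoreDNS Corefile
	# looks like:
	#   hosts {
	#      192.168.67.1 host.minikube.internal
	#      fallthrough
	#   }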
	I0114 10:59:04.851924  269956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.712571344s)
	I0114 10:59:04.851967  269956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.610733305s)
	I0114 10:59:04.853765  269956 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 10:59:04.855028  269956 addons.go:488] enableAddons completed in 1.970577362s
	I0114 10:59:05.042363  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:07.042998  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:09.542606  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:11.542810  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:13.543048  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:15.543152  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:18.043304  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:20.543383  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:23.052985  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:25.543323  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:27.543407  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:30.042420  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:32.042871  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:34.043574  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:36.541870  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:38.543056  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:41.042291  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:43.542154  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:45.542799  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:48.042595  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:50.543164  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:53.043040  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:55.542046  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 10:59:57.542856  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:00.042112  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:02.042301  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:04.543484  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:07.044062  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:09.542824  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:11.542956  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:13.543871  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:16.042492  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:18.042975  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:20.043186  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:22.542050  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:24.542396  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:27.042990  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:29.542928  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:32.042509  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:34.542225  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:36.542706  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:38.543644  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:41.043663  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:43.542829  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:45.566587  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:48.042212  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:50.042772  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:52.542244  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:54.542363  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:56.542599  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:00:59.042400  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:01.042702  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:03.542430  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:05.542503  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:07.542592  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:10.042399  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:12.542123  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:14.542745  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:17.042476  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:19.042773  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:21.042892  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:23.542711  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:26.042540  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:28.542203  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:31.041689  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:33.043295  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:35.542761  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:37.542989  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:40.042294  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:42.042705  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:44.042884  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:46.542632  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:49.042196  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:51.542333  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:53.542969  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:56.042372  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:01:58.042744  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:00.542170  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:02.542932  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:05.041833  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:07.543056  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:10.041998  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:12.042148  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:14.541805  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:16.542044  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:19.041881  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:21.042117  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:23.043204  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:25.541773  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:27.543616  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:30.042049  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:32.042491  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:34.042845  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:36.542988  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:39.041765  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:41.042847  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:43.543569  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:46.041960  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:48.043425  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:50.542363  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:52.542741  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:54.543001  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:56.543092  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:02:58.543353  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:01.042996  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:03.046979  269956 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:03.047007  269956 pod_ready.go:81] duration metric: took 4m0.015624334s waiting for pod "calico-kube-controllers-7df895d496-hxksd" in "kube-system" namespace to be "Ready" ...
	E0114 11:03:03.047016  269956 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
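	# [editor's note, not captured output] Hedged triage sketch for the timeout above:
	# calico-kube-controllers cannot go Ready until the calico-node DaemonSet is up, so
	# the usual next step is to inspect both (the container name assumes the stock
	# Calico manifest):
	#   kubectl -n kube-system describe pod -l k8s-app=calico-kube-controllers
	#   kubectl -n kube-system logs -l k8s-app=calico-node -c calico-node --tail=50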
	I0114 11:03:03.047023  269956 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-rvmdg" in "kube-system" namespace to be "Ready" ...
	I0114 11:03:05.059453  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:07.059546  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:09.060177  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:11.560681  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:14.060978  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:16.567020  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:19.058484  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:21.059512  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:23.558924  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:26.059320  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:28.559532  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:31.059658  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:33.559081  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:35.559140  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:37.559279  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:39.559323  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:41.559408  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:44.059458  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:46.059799  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:48.559447  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:51.058997  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:53.060172  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:55.060632  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:57.559019  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:03:59.559657  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:02.059156  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:04.559479  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:07.059435  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:09.558925  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:11.559548  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:14.058786  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:16.059382  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:18.559473  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:20.561667  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:23.059980  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:25.559801  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:27.559913  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:30.059572  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:32.558952  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:34.559585  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:37.059970  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:39.559605  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:42.059073  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:44.059986  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:46.559795  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:49.059051  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:51.560223  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:54.058839  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:56.059761  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:04:58.559404  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:00.559512  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:03.060017  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:05.060124  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:07.559904  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:10.059394  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:12.059933  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:14.559102  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:16.559947  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:19.058764  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:21.059349  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:23.560126  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:26.059537  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:28.558919  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:30.559548  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:33.060298  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:35.560097  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:37.560611  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:40.059099  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:42.059461  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:44.059498  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:46.059850  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:48.559845  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:51.058873  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:53.059219  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:55.565513  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:05:58.059454  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:00.059897  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:02.559557  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:04.560062  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:07.060091  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:09.559527  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:12.059138  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:14.059345  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:16.060282  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:18.559640  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:20.560485  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:23.059425  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:25.059462  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:27.059664  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:29.559316  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:32.059380  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:34.059898  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:36.060020  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:38.558909  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:40.559384  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:43.059161  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:45.059200  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:47.559852  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:50.059051  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:52.559989  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:55.059644  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:06:57.560640  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:07:00.059113  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:07:02.059478  269956 pod_ready.go:102] pod "calico-node-rvmdg" in "kube-system" namespace has status "Ready":"False"
	I0114 11:07:03.063595  269956 pod_ready.go:81] duration metric: took 4m0.016561469s waiting for pod "calico-node-rvmdg" in "kube-system" namespace to be "Ready" ...
	E0114 11:07:03.063618  269956 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0114 11:07:03.063632  269956 pod_ready.go:38] duration metric: took 8m0.041444147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 11:07:03.066119  269956 out.go:177] 
	W0114 11:07:03.067693  269956 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0114 11:07:03.067718  269956 out.go:239] * 
	* 
	W0114 11:07:03.068628  269956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 11:07:03.070073  269956 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (516.20s)
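Summary of the failure above: the CNI manifest applied cleanly, but neither calico-kube-controllers-7df895d496-hxksd nor calico-node-rvmdg ever reached "Ready" during roughly eight minutes of extra waiting, so minikube exited with GUEST_START and the test failed with exit status 80. As a hedged local reproduction sketch (flags reconstructed from the profile config in the log, not the test's exact invocation):

	out/minikube-linux-amd64 start -p calico-104616 --cni=calico --driver=docker --container-runtime=containerd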

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (339.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:00:56.157098   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 11:00:59.585161   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134155075s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
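The nslookup output above, ";; connection timed out; no servers could be reached", means the netcat pod got no response from the cluster DNS server at all, not a failed lookup. A hedged first triage pass with standard kubectl calls (assuming the netcat image carries cat; nothing here is test-specific):

	kubectl --context bridge-104615 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context bridge-104615 exec deployment/netcat -- cat /etc/resolv.conf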
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:01:20.065772   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131430354s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:01:33.891098   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128162095s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132320587s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:02:01.026339   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129698507s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136062628s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:02:53.110766   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 11:02:55.811890   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12543741s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:03:11.966529   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:11.971744   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:11.981990   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:12.002242   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:12.043324   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:12.123494   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:12.284491   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:12.605466   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:13.246239   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:14.527158   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:17.087747   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132454831s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0114 11:03:22.208599   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:22.947406   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:03:32.448990   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:03:33.933876   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134237535s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0114 11:03:52.929728   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:03:58.539947   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:58.545192   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:58.555465   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:58.575763   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:58.616020   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:58.696332   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:58.856735   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:59.177410   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:03:59.818211   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:04:01.099132   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:04:03.659315   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:04:08.780409   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126266081s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0114 11:04:39.502210   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:05:05.230559   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:05:11.965847   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125112109s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137990068s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (339.61s)
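Triage note: the failing probe above is an in-cluster DNS lookup and can be replayed by hand. A minimal sketch, assuming the bridge-104615 profile and the test's netcat deployment are still up, and a standard CoreDNS install with the usual k8s-app=kube-dns label (assumptions, not confirmed by this log):

	# Replay the exact probe from net_test.go:169.
	kubectl --context bridge-104615 exec deployment/netcat -- nslookup kubernetes.default
	# The test wants the answer to contain 10.96.0.1, the ClusterIP of the
	# kubernetes service; confirm what the cluster actually advertises.
	kubectl --context bridge-104615 get svc kubernetes -n default
	# "no servers could be reached" means no reply from cluster DNS at all,
	# so check the CoreDNS pods and the kube-dns service next.
	kubectl --context bridge-104615 get pods -n kube-system -l k8s-app=kube-dns
	kubectl --context bridge-104615 get svc kube-dns -n kube-system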
TestNetworkPlugins/group/enable-default-cni/DNS (370.74s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:04:33.890845   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140193265s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:04:44.750349   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:44.755641   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:44.765927   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:44.786209   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:44.826478   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:44.906788   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:45.067065   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:45.387765   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:46.028271   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:47.308735   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:49.869167   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:04:54.990102   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120995299s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143671316s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:05:20.462591   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
E0114 11:05:25.711243   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:05:27.808009   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139032591s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:05:39.103788   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:05:39.652897   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12467997s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:05:55.811831   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:06:06.671824   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:06:06.788268   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12499596s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139080098s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147379546s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126901574s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:07:53.110640   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121711486s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0114 11:08:11.966977   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
E0114 11:08:16.981924   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:08:33.934011   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 11:08:39.652920   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/auto-104615/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125669589s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0114 11:08:58.539863   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:09:26.223136   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137322349s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0114 11:09:44.750524   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
E0114 11:10:11.966296   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:10:12.432787   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:10:27.807885   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122875722s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (370.74s)
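Triage note: same symptom as the bridge/DNS failure above, so the same checks apply. One extra angle, sketched under two assumptions this log does not confirm: that the netcat image carries busybox userland (for cat and for nslookup with an explicit server argument), and that cluster DNS sits at the conventional 10.96.0.10 of the default 10.96.0.0/12 service CIDR:

	# Confirm the pod's resolver points at the cluster DNS service.
	kubectl --context enable-default-cni-104615 exec deployment/netcat -- cat /etc/resolv.conf
	# Query that server directly; a timeout here implicates the CNI data
	# path between the pod and the DNS service rather than CoreDNS config.
	kubectl --context enable-default-cni-104615 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10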


Test pass (248/278)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 27.16
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.25.3/json-events 23.63
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.27
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
18 TestDownloadOnlyKic 5.33
19 TestBinaryMirror 0.82
20 TestOffline 59.43
22 TestAddons/Setup 102.25
24 TestAddons/parallel/Registry 20.52
25 TestAddons/parallel/Ingress 24.74
26 TestAddons/parallel/MetricsServer 5.46
27 TestAddons/parallel/HelmTiller 11.27
29 TestAddons/parallel/CSI 43.55
30 TestAddons/parallel/Headlamp 15.17
31 TestAddons/parallel/CloudSpanner 5.31
34 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/StoppedEnableDisable 20.19
36 TestCertOptions 27.88
37 TestCertExpiration 225.43
39 TestForceSystemdFlag 41.9
40 TestForceSystemdEnv 28.81
41 TestKVMDriverInstallOrUpdate 7.04
45 TestErrorSpam/setup 22.61
46 TestErrorSpam/start 0.92
47 TestErrorSpam/status 1.08
48 TestErrorSpam/pause 1.61
49 TestErrorSpam/unpause 1.56
50 TestErrorSpam/stop 1.51
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 43.05
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 15.31
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.09
61 TestFunctional/serial/CacheCmd/cache/add_remote 4.52
62 TestFunctional/serial/CacheCmd/cache/add_local 1.9
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.01
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.13
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
70 TestFunctional/serial/ExtraConfig 37.84
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.13
73 TestFunctional/serial/LogsFileCmd 1.14
75 TestFunctional/parallel/ConfigCmd 0.49
76 TestFunctional/parallel/DashboardCmd 8.45
77 TestFunctional/parallel/DryRun 0.54
78 TestFunctional/parallel/InternationalLanguage 0.24
79 TestFunctional/parallel/StatusCmd 1.4
82 TestFunctional/parallel/ServiceCmd 12.31
83 TestFunctional/parallel/ServiceCmdConnect 6.62
84 TestFunctional/parallel/AddonsCmd 0.23
85 TestFunctional/parallel/PersistentVolumeClaim 32.02
87 TestFunctional/parallel/SSHCmd 0.7
88 TestFunctional/parallel/CpCmd 1.48
89 TestFunctional/parallel/MySQL 23.53
90 TestFunctional/parallel/FileSync 0.34
91 TestFunctional/parallel/CertSync 2.24
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
99 TestFunctional/parallel/License 0.59
100 TestFunctional/parallel/Version/short 0.08
101 TestFunctional/parallel/Version/components 0.7
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.04
107 TestFunctional/parallel/ImageCommands/Setup 1.81
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.25
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.82
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.79
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.13
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
122 TestFunctional/parallel/ProfileCmd/profile_list 0.51
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
124 TestFunctional/parallel/MountCmd/any-port 10.75
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.7
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.13
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.87
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
132 TestFunctional/parallel/MountCmd/specific-port 2.07
133 TestFunctional/delete_addon-resizer_images 0.08
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
138 TestIngressAddonLegacy/StartLegacyK8sCluster 86.22
140 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.65
141 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.37
142 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.3
145 TestJSONOutput/start/Command 43.67
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.69
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.6
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 5.77
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.28
170 TestKicCustomNetwork/create_custom_network 42.74
171 TestKicCustomNetwork/use_default_bridge_network 27.03
172 TestKicExistingNetwork 26.88
173 TestKicCustomSubnet 28.25
174 TestKicStaticIP 28.38
175 TestMainNoArgs 0.07
176 TestMinikubeProfile 53.42
179 TestMountStart/serial/StartWithMountFirst 5.09
180 TestMountStart/serial/VerifyMountFirst 0.32
181 TestMountStart/serial/StartWithMountSecond 7.96
182 TestMountStart/serial/VerifyMountSecond 0.32
183 TestMountStart/serial/DeleteFirst 1.69
184 TestMountStart/serial/VerifyMountPostDelete 0.32
185 TestMountStart/serial/Stop 1.24
186 TestMountStart/serial/RestartStopped 6.75
187 TestMountStart/serial/VerifyMountPostStop 0.32
190 TestMultiNode/serial/FreshStart2Nodes 87.95
191 TestMultiNode/serial/DeployApp2Nodes 5.2
192 TestMultiNode/serial/PingHostFrom2Pods 0.9
193 TestMultiNode/serial/AddNode 33.21
194 TestMultiNode/serial/ProfileList 0.36
195 TestMultiNode/serial/CopyFile 11.51
196 TestMultiNode/serial/StopNode 2.34
197 TestMultiNode/serial/StartAfterStop 30.91
200 TestMultiNode/serial/StopMultiNode 21.39
201 TestMultiNode/serial/RestartMultiNode 101.89
202 TestMultiNode/serial/ValidateNameConflict 25.48
209 TestScheduledStopUnix 101.22
212 TestInsufficientStorage 15.13
213 TestRunningBinaryUpgrade 91.66
216 TestMissingContainerUpgrade 131.03
217 TestStoppedBinaryUpgrade/Setup 1.91
221 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
228 TestPause/serial/Start 67.44
229 TestNoKubernetes/serial/StartWithK8s 34.58
230 TestStoppedBinaryUpgrade/Upgrade 147.08
231 TestNoKubernetes/serial/StartWithStopK8s 19.11
232 TestNoKubernetes/serial/Start 5.12
233 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
234 TestNoKubernetes/serial/ProfileList 1.55
235 TestNoKubernetes/serial/Stop 1.45
236 TestNoKubernetes/serial/StartNoArgs 6.32
237 TestPause/serial/SecondStartNoReconfiguration 16.05
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
246 TestNetworkPlugins/group/false 0.48
250 TestPause/serial/Pause 1.3
251 TestPause/serial/VerifyStatus 0.37
252 TestPause/serial/Unpause 0.93
253 TestPause/serial/PauseAgain 1.19
254 TestPause/serial/DeletePaused 2.88
255 TestPause/serial/VerifyDeletedResources 0.53
256 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
258 TestStartStop/group/old-k8s-version/serial/FirstStart 124.63
260 TestStartStop/group/no-preload/serial/FirstStart 51.13
262 TestStartStop/group/embed-certs/serial/FirstStart 45.36
263 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
264 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.55
265 TestStartStop/group/old-k8s-version/serial/Stop 20.12
266 TestStartStop/group/no-preload/serial/DeployApp 8.38
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
268 TestStartStop/group/old-k8s-version/serial/SecondStart 431.91
269 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.62
270 TestStartStop/group/no-preload/serial/Stop 20.08
271 TestStartStop/group/embed-certs/serial/DeployApp 9.31
272 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.73
273 TestStartStop/group/embed-certs/serial/Stop 20.12
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
275 TestStartStop/group/no-preload/serial/SecondStart 314.4
276 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
277 TestStartStop/group/embed-certs/serial/SecondStart 313.45
278 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.02
279 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
280 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
281 TestStartStop/group/no-preload/serial/Pause 3.42
282 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.02
284 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.51
285 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
286 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
287 TestStartStop/group/embed-certs/serial/Pause 3.58
289 TestStartStop/group/newest-cni/serial/FirstStart 36.95
290 TestNetworkPlugins/group/auto/Start 48.4
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
292 TestStartStop/group/newest-cni/serial/DeployApp 0
293 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.57
294 TestStartStop/group/newest-cni/serial/Stop 1.29
295 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
296 TestStartStop/group/newest-cni/serial/SecondStart 30.07
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.8
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 20.21
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
302 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 563.38
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.45
304 TestStartStop/group/old-k8s-version/serial/Pause 3.24
305 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
308 TestStartStop/group/newest-cni/serial/Pause 3.31
309 TestNetworkPlugins/group/auto/KubeletFlags 0.38
310 TestNetworkPlugins/group/kindnet/Start 47.18
311 TestNetworkPlugins/group/auto/NetCatPod 12.3
312 TestNetworkPlugins/group/cilium/Start 92.2
313 TestNetworkPlugins/group/auto/DNS 0.14
314 TestNetworkPlugins/group/auto/Localhost 0.14
315 TestNetworkPlugins/group/auto/HairPin 0.14
317 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
319 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
320 TestNetworkPlugins/group/kindnet/DNS 0.15
321 TestNetworkPlugins/group/kindnet/Localhost 0.14
322 TestNetworkPlugins/group/kindnet/HairPin 0.15
323 TestNetworkPlugins/group/enable-default-cni/Start 297.44
324 TestNetworkPlugins/group/cilium/ControllerPod 5.02
325 TestNetworkPlugins/group/cilium/KubeletFlags 0.36
326 TestNetworkPlugins/group/cilium/NetCatPod 10.89
327 TestNetworkPlugins/group/cilium/DNS 0.14
328 TestNetworkPlugins/group/cilium/Localhost 0.13
329 TestNetworkPlugins/group/cilium/HairPin 0.12
330 TestNetworkPlugins/group/bridge/Start 39.35
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
332 TestNetworkPlugins/group/bridge/NetCatPod 9.24
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
TestDownloadOnly/v1.16.0/json-events (27.16s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-100553 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-100553 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (27.156033119s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (27.16s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-100553
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-100553: exit status 85 (82.393759ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100553 | jenkins | v1.28.0 | 14 Jan 23 10:05 UTC |          |
	|         | -p download-only-100553        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:05:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:05:54.024094   10318 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:05:54.024196   10318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:05:54.024204   10318 out.go:309] Setting ErrFile to fd 2...
	I0114 10:05:54.024208   10318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:05:54.024313   10318 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	W0114 10:05:54.024425   10318 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-3818/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-3818/.minikube/config/config.json: no such file or directory
	I0114 10:05:54.025007   10318 out.go:303] Setting JSON to true
	I0114 10:05:54.025840   10318 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2901,"bootTime":1673687853,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:05:54.025910   10318 start.go:135] virtualization: kvm guest
	I0114 10:05:54.028521   10318 out.go:97] [download-only-100553] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:05:54.028605   10318 notify.go:220] Checking for updates...
	W0114 10:05:54.028609   10318 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:05:54.030158   10318 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:05:54.032240   10318 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:05:54.033956   10318 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:05:54.035568   10318 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:05:54.037143   10318 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0114 10:05:54.039821   10318 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:05:54.039992   10318 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:05:54.065659   10318 docker.go:138] docker version: linux-20.10.22
	I0114 10:05:54.065759   10318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:05:54.831050   10318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:05:54.083838823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:05:54.831166   10318 docker.go:255] overlay module found
	I0114 10:05:54.833357   10318 out.go:97] Using the docker driver based on user configuration
	I0114 10:05:54.833412   10318 start.go:294] selected driver: docker
	I0114 10:05:54.833425   10318 start.go:838] validating driver "docker" against <nil>
	I0114 10:05:54.833535   10318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:05:54.933175   10318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:05:54.854507899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:05:54.933302   10318 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 10:05:54.933784   10318 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0114 10:05:54.933910   10318 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 10:05:54.936225   10318 out.go:169] Using Docker driver with root privileges
	I0114 10:05:54.937783   10318 cni.go:95] Creating CNI manager for ""
	I0114 10:05:54.937813   10318 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:05:54.937838   10318 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0114 10:05:54.937848   10318 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0114 10:05:54.937860   10318 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0114 10:05:54.937880   10318 start_flags.go:319] config:
	{Name:download-only-100553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:05:54.939727   10318 out.go:97] Starting control plane node download-only-100553 in cluster download-only-100553
	I0114 10:05:54.939779   10318 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:05:54.941355   10318 out.go:97] Pulling base image ...
	I0114 10:05:54.941393   10318 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0114 10:05:54.941494   10318 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:05:54.963555   10318 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 10:05:54.963913   10318 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0114 10:05:54.964001   10318 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 10:05:55.035429   10318 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0114 10:05:55.035459   10318 cache.go:57] Caching tarball of preloaded images
	I0114 10:05:55.035621   10318 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0114 10:05:55.038077   10318 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0114 10:05:55.038105   10318 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:05:55.136543   10318 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0114 10:06:10.623776   10318 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:06:10.623884   10318 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:06:11.486615   10318 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0114 10:06:11.486952   10318 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/download-only-100553/config.json ...
	I0114 10:06:11.486994   10318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/download-only-100553/config.json: {Name:mkac6a4bdca421e9fccf73d604dcd1eba59ec13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:06:11.487180   10318 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0114 10:06:11.487384   10318 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15642-3818/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100553"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.25.3/json-events (23.63s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-100553 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-100553 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (23.629674885s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (23.63s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-100553
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-100553: exit status 85 (84.98113ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100553 | jenkins | v1.28.0 | 14 Jan 23 10:05 UTC |          |
	|         | -p download-only-100553        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-100553 | jenkins | v1.28.0 | 14 Jan 23 10:06 UTC |          |
	|         | -p download-only-100553        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:06:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:06:21.260537   10484 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:06:21.260844   10484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:21.260856   10484 out.go:309] Setting ErrFile to fd 2...
	I0114 10:06:21.260864   10484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:21.261131   10484 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	W0114 10:06:21.261346   10484 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-3818/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-3818/.minikube/config/config.json: no such file or directory
	I0114 10:06:21.262176   10484 out.go:303] Setting JSON to true
	I0114 10:06:21.262950   10484 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2929,"bootTime":1673687853,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:06:21.263019   10484 start.go:135] virtualization: kvm guest
	I0114 10:06:21.265173   10484 out.go:97] [download-only-100553] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:06:21.265284   10484 notify.go:220] Checking for updates...
	I0114 10:06:21.266728   10484 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:06:21.268228   10484 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:06:21.269635   10484 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:06:21.271061   10484 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:06:21.272500   10484 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0114 10:06:21.274928   10484 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:06:21.275339   10484 config.go:180] Loaded profile config "download-only-100553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0114 10:06:21.275374   10484 start.go:746] api.Load failed for download-only-100553: filestore "download-only-100553": Docker machine "download-only-100553" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:06:21.275428   10484 driver.go:365] Setting default libvirt URI to qemu:///system
	W0114 10:06:21.275452   10484 start.go:746] api.Load failed for download-only-100553: filestore "download-only-100553": Docker machine "download-only-100553" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:06:21.299653   10484 docker.go:138] docker version: linux-20.10.22
	I0114 10:06:21.299777   10484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:06:21.391794   10484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:06:21.318668723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:06:21.391897   10484 docker.go:255] overlay module found
	I0114 10:06:21.393977   10484 out.go:97] Using the docker driver based on existing profile
	I0114 10:06:21.394001   10484 start.go:294] selected driver: docker
	I0114 10:06:21.394006   10484 start.go:838] validating driver "docker" against &{Name:download-only-100553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:21.394167   10484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:06:21.495027   10484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-14 10:06:21.412667157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:06:21.495557   10484 cni.go:95] Creating CNI manager for ""
	I0114 10:06:21.495572   10484 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0114 10:06:21.495585   10484 start_flags.go:319] config:
	{Name:download-only-100553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-100553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:21.497442   10484 out.go:97] Starting control plane node download-only-100553 in cluster download-only-100553
	I0114 10:06:21.497462   10484 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0114 10:06:21.498761   10484 out.go:97] Pulling base image ...
	I0114 10:06:21.498796   10484 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:06:21.498847   10484 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:06:21.518682   10484 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 10:06:21.518908   10484 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0114 10:06:21.518926   10484 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory, skipping pull
	I0114 10:06:21.518931   10484 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in cache, skipping pull
	I0114 10:06:21.518945   10484 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c as a tarball
	I0114 10:06:21.761422   10484 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:06:21.761450   10484 cache.go:57] Caching tarball of preloaded images
	I0114 10:06:21.761670   10484 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:06:21.763864   10484 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0114 10:06:21.763885   10484 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:06:21.860745   10484 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15642-3818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100553"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.27s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-100553
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestDownloadOnlyKic (5.33s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-100645 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-100645 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (3.862501221s)
helpers_test.go:175: Cleaning up "download-docker-100645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-100645
--- PASS: TestDownloadOnlyKic (5.33s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-100650 --alsologtostderr --binary-mirror http://127.0.0.1:35303 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-100650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-100650
--- PASS: TestBinaryMirror (0.82s)

TestOffline (59.43s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-104504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-104504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (56.700469844s)
helpers_test.go:175: Cleaning up "offline-containerd-104504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-104504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-104504: (2.72451092s)
--- PASS: TestOffline (59.43s)

TestAddons/Setup (102.25s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p addons-100651 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p addons-100651 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m42.254494555s)
--- PASS: TestAddons/Setup (102.25s)

TestAddons/parallel/Registry (20.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 8.474417ms
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-fzvtj" [a78b6c79-2eb6-474a-b8d8-9262947843a1] Running
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007492362s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-7k2hr" [cae75e77-cf75-46cf-b64b-d96fe1e48781] Running
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008757589s
addons_test.go:297: (dbg) Run:  kubectl --context addons-100651 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-100651 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:302: (dbg) Done: kubectl --context addons-100651 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.704429559s)
addons_test.go:316: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 ip
addons_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.52s)

TestAddons/parallel/Ingress (24.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-100651 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:189: (dbg) Run:  kubectl --context addons-100651 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-100651 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [3239aeda-1729-4a6b-bb12-b134afb04759] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [3239aeda-1729-4a6b-bb12-b134afb04759] Running
2023/01/14 10:08:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.008599825s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context addons-100651 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-100651 addons disable ingress-dns --alsologtostderr -v=1: (1.137796559s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p addons-100651 addons disable ingress --alsologtostderr -v=1: (7.554468399s)
--- PASS: TestAddons/parallel/Ingress (24.74s)

TestAddons/parallel/MetricsServer (5.46s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 2.316861ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-wc5xj" [9f9dce17-fcab-40c6-9f4b-f954accfa051] Running
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008542676s
addons_test.go:372: (dbg) Run:  kubectl --context addons-100651 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.46s)

TestAddons/parallel/HelmTiller (11.27s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 8.350352ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-gjd9m" [1ba03085-b1f2-40c4-93c1-f1b8092759c4] Running
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007940315s
addons_test.go:430: (dbg) Run:  kubectl --context addons-100651 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:430: (dbg) Done: kubectl --context addons-100651 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.781444522s)
addons_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.27s)

TestAddons/parallel/CSI (43.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 10.309964ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-100651 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100651 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-100651 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [a08f0d1b-cac9-4ac2-bb25-3e558b0f3881] Pending
helpers_test.go:342: "task-pv-pod" [a08f0d1b-cac9-4ac2-bb25-3e558b0f3881] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [a08f0d1b-cac9-4ac2-bb25-3e558b0f3881] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.006461957s
addons_test.go:541: (dbg) Run:  kubectl --context addons-100651 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100651 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100651 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-100651 delete pod task-pv-pod
addons_test.go:557: (dbg) Run:  kubectl --context addons-100651 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-100651 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100651 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-100651 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [e0561a63-4438-4445-98a0-39642d4df2f1] Pending
helpers_test.go:342: "task-pv-pod-restore" [e0561a63-4438-4445-98a0-39642d4df2f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [e0561a63-4438-4445-98a0-39642d4df2f1] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.008246432s
addons_test.go:583: (dbg) Run:  kubectl --context addons-100651 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Done: kubectl --context addons-100651 delete pod task-pv-pod-restore: (1.201205984s)
addons_test.go:587: (dbg) Run:  kubectl --context addons-100651 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-100651 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-linux-amd64 -p addons-100651 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80180028s)
addons_test.go:599: (dbg) Run:  out/minikube-linux-amd64 -p addons-100651 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.55s)

TestAddons/parallel/Headlamp (15.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-100651 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-100651 --alsologtostderr -v=1: (1.097703277s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-cdcjv" [b95e938d-8e71-41a3-acc6-c8360ba32caa] Pending
helpers_test.go:342: "headlamp-764769c887-cdcjv" [b95e938d-8e71-41a3-acc6-c8360ba32caa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-764769c887-cdcjv" [b95e938d-8e71-41a3-acc6-c8360ba32caa] Running
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.07608087s
--- PASS: TestAddons/parallel/Headlamp (15.17s)

TestAddons/parallel/CloudSpanner (5.31s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-x649m" [47b3011a-4fc5-48d9-8cfa-03c66b9123f1] Running
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005424521s
addons_test.go:798: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-100651
--- PASS: TestAddons/parallel/CloudSpanner (5.31s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-100651 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-100651 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (20.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-100651
addons_test.go:139: (dbg) Done: out/minikube-linux-amd64 stop -p addons-100651: (19.989491968s)
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-100651
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-100651
--- PASS: TestAddons/StoppedEnableDisable (20.19s)

TestCertOptions (27.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-104714 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-104714 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.142449788s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-104714 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-104714 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-104714 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-104714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-104714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-104714: (2.809622181s)
--- PASS: TestCertOptions (27.88s)

TestCertExpiration (225.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-104623 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-104623 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.195515945s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-104623 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-104623 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.942780046s)
helpers_test.go:175: Cleaning up "cert-expiration-104623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-104623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-104623: (4.290124441s)
--- PASS: TestCertExpiration (225.43s)

TestForceSystemdFlag (41.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-104632 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-104632 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.900735996s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-104632 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-104632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-104632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-104632: (2.638210465s)
--- PASS: TestForceSystemdFlag (41.90s)

TestForceSystemdEnv (28.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-104603 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-104603 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.747187965s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-104603 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-104603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-104603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-104603: (4.642361774s)
--- PASS: TestForceSystemdEnv (28.81s)

TestKVMDriverInstallOrUpdate (7.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.04s)

TestErrorSpam/setup (22.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-101804 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-101804 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-101804 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-101804 --driver=docker  --container-runtime=containerd: (22.611841233s)
--- PASS: TestErrorSpam/setup (22.61s)

TestErrorSpam/start (0.92s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 start --dry-run
--- PASS: TestErrorSpam/start (0.92s)

TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 pause
--- PASS: TestErrorSpam/pause (1.61s)

TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 stop: (1.255611315s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 stop
E0114 10:18:33.933480   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:18:33.939580   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101804 --log_dir /tmp/nospam-101804 stop
E0114 10:18:33.950694   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:18:33.970980   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:18:34.011263   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
--- PASS: TestErrorSpam/stop (1.51s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15642-3818/.minikube/files/etc/test/nested/copy/10306/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101838 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0114 10:18:39.054733   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:18:44.175400   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:18:54.415709   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:19:14.896833   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-101838 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (43.052399931s)
--- PASS: TestFunctional/serial/StartWithProxy (43.05s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101838 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-101838 --alsologtostderr -v=8: (15.312276082s)
functional_test.go:656: soft start took 15.312930727s for "functional-101838" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.31s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-101838 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 cache add k8s.gcr.io/pause:3.1: (1.693278737s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 cache add k8s.gcr.io/pause:3.3: (1.628890236s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 cache add k8s.gcr.io/pause:latest: (1.198685137s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.52s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-101838 /tmp/TestFunctionalserialCacheCmdcacheadd_local3848556666/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cache add minikube-local-cache-test:functional-101838
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 cache add minikube-local-cache-test:functional-101838: (1.662903971s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cache delete minikube-local-cache-test:functional-101838
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-101838
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)
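
The local variant differs from add_remote only in the image source: the tag exists solely in the host's docker daemon, and "cache add" still lands it on the node. A sketch under the assumption that the profile is running (the build-context directory is illustrative; the test uses a generated temp dir):

  $ docker build -t minikube-local-cache-test:functional-101838 ./testdata-dir    # hypothetical context dir
  $ out/minikube-linux-amd64 -p functional-101838 cache add minikube-local-cache-test:functional-101838
  $ out/minikube-linux-amd64 -p functional-101838 cache delete minikube-local-cache-test:functional-101838
  $ docker rmi minikube-local-cache-test:functional-101838    # clean up the host-side tag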
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (332.395609ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)
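
The sequence above is the whole point of "cache reload": the image is removed on the node, "crictl inspecti" confirms it is gone (exit status 1), and "cache reload" pushes every host-cached image back into the runtime. A minimal manual repro, assuming the same functional-101838 profile is up:

  $ out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl rmi k8s.gcr.io/pause:latest
  $ out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1: image gone
  $ out/minikube-linux-amd64 -p functional-101838 cache reload                                       # re-push cached images
  $ out/minikube-linux-amd64 -p functional-101838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again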
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 kubectl -- --context functional-101838 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-101838 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101838 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0114 10:19:55.858061   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-101838 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.835126039s)
functional_test.go:754: restart took 37.835239421s for "functional-101838" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.84s)
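
"--extra-config" is how the test passes a component flag through to kubeadm; restarting the existing cluster with it enables the NamespaceAutoProvision admission plugin on the apiserver. Equivalent invocation, assuming the profile already exists:

  $ out/minikube-linux-amd64 start -p functional-101838 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all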
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-101838 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 logs: (1.134419134s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 logs --file /tmp/TestFunctionalserialLogsFileCmd2272888131/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 logs --file /tmp/TestFunctionalserialLogsFileCmd2272888131/001/logs.txt: (1.14014695s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 config get cpus: exit status 14 (84.025788ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 config get cpus: exit status 14 (85.406739ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
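
The exit-status-14 failures above are expected: "config get" on an unset key exits 14, which is exactly what the test asserts around the set/unset cycle. Condensed:

  $ out/minikube-linux-amd64 -p functional-101838 config get cpus     # exit 14: key not in config
  $ out/minikube-linux-amd64 -p functional-101838 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-101838 config get cpus     # prints 2
  $ out/minikube-linux-amd64 -p functional-101838 config unset cpus
  $ out/minikube-linux-amd64 -p functional-101838 config get cpus     # exit 14 again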
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101838 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101838 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 48038: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.45s)
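
The dashboard runs as a background daemon and is torn down by the harness; the "unable to kill pid" message just means the process had already exited, so it is harmless here. Manual equivalent (36195 is the port this particular run chose):

  $ out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101838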
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101838 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-101838 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (220.439362ms)
-- stdout --
	* [functional-101838] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0114 10:20:46.260556   46975 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:20:46.260680   46975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:20:46.260687   46975 out.go:309] Setting ErrFile to fd 2...
	I0114 10:20:46.260692   46975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:20:46.260796   46975 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:20:46.261304   46975 out.go:303] Setting JSON to false
	I0114 10:20:46.262534   46975 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3794,"bootTime":1673687853,"procs":607,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:20:46.262595   46975 start.go:135] virtualization: kvm guest
	I0114 10:20:46.264715   46975 out.go:177] * [functional-101838] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:20:46.266211   46975 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:20:46.266209   46975 notify.go:220] Checking for updates...
	I0114 10:20:46.267887   46975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:20:46.269425   46975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:20:46.271039   46975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:20:46.272489   46975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:20:46.274590   46975 config.go:180] Loaded profile config "functional-101838": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:20:46.275108   46975 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:20:46.305182   46975 docker.go:138] docker version: linux-20.10.22
	I0114 10:20:46.305262   46975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:20:46.403294   46975 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-14 10:20:46.324332545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:20:46.403388   46975 docker.go:255] overlay module found
	I0114 10:20:46.405822   46975 out.go:177] * Using the docker driver based on existing profile
	I0114 10:20:46.407196   46975 start.go:294] selected driver: docker
	I0114 10:20:46.407209   46975 start.go:838] validating driver "docker" against &{Name:functional-101838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-101838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:20:46.407353   46975 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:20:46.409854   46975 out.go:177] 
	W0114 10:20:46.411468   46975 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 10:20:46.412801   46975 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101838 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.54s)
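
"--dry-run" still performs driver and resource validation, so an undersized --memory is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is created, while the well-formed dry run that follows succeeds. Sketch:

  $ out/minikube-linux-amd64 start -p functional-101838 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  $ echo $?    # 23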
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101838 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-101838 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (243.564461ms)
-- stdout --
	* [functional-101838] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0114 10:20:46.813669   47215 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:20:46.813816   47215 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:20:46.813826   47215 out.go:309] Setting ErrFile to fd 2...
	I0114 10:20:46.813833   47215 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:20:46.814028   47215 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:20:46.814583   47215 out.go:303] Setting JSON to false
	I0114 10:20:46.815869   47215 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3794,"bootTime":1673687853,"procs":609,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:20:46.815931   47215 start.go:135] virtualization: kvm guest
	I0114 10:20:46.818694   47215 out.go:177] * [functional-101838] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0114 10:20:46.820718   47215 notify.go:220] Checking for updates...
	I0114 10:20:46.822315   47215 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:20:46.823922   47215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:20:46.825677   47215 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:20:46.827218   47215 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:20:46.828687   47215 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:20:46.830586   47215 config.go:180] Loaded profile config "functional-101838": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:20:46.830972   47215 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:20:46.860628   47215 docker.go:138] docker version: linux-20.10.22
	I0114 10:20:46.860738   47215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:20:46.966140   47215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2023-01-14 10:20:46.882280151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:20:46.966283   47215 docker.go:255] overlay module found
	I0114 10:20:46.968482   47215 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0114 10:20:46.970123   47215 start.go:294] selected driver: docker
	I0114 10:20:46.970147   47215 start.go:838] validating driver "docker" against &{Name:functional-101838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-101838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:20:46.970307   47215 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:20:46.973035   47215 out.go:177] 
	W0114 10:20:46.974630   47215 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 10:20:46.976070   47215 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
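
Same undersized dry run, but with minikube's output localized to French. The log does not show how the test switches locale; presumably it goes through the standard environment variables, along the lines of:

  $ LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-101838 --dry-run --memory 250MB --driver=docker --container-runtime=containerd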
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)
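
"status -f" takes a Go template over the status struct (the "kublet:" label is verbatim from the test command, not a transcription error), and "-o json" emits the same fields machine-readably:

  $ out/minikube-linux-amd64 -p functional-101838 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  $ out/minikube-linux-amd64 -p functional-101838 status -o json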
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-101838 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-101838 expose deployment hello-node --type=NodePort --port=8080
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-4g6tr" [37a4febc-984e-454a-9d2c-c3d4702a0f02] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-4g6tr" [37a4febc-984e-454a-9d2c-c3d4702a0f02] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.013472166s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.49.2:31098
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:31098
--- PASS: TestFunctional/parallel/ServiceCmd (12.31s)
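
End to end, the flow above is: create and expose a deployment, then ask minikube for the NodePort URL (31098 was assigned by this run and will differ elsewhere):

  $ kubectl --context functional-101838 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
  $ kubectl --context functional-101838 expose deployment hello-node --type=NodePort --port=8080
  $ out/minikube-linux-amd64 -p functional-101838 service list
  $ out/minikube-linux-amd64 -p functional-101838 service hello-node --url    # e.g. http://192.168.49.2:31098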
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-101838 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-101838 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-7rs57" [10e4c25a-c83d-4685-a707-6fcf7b05fb1d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-7rs57" [10e4c25a-c83d-4685-a707-6fcf7b05fb1d] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.007177857s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 service hello-node-connect --url
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:31662
functional_test.go:1605: http://192.168.49.2:31662: success! body:
Hostname: hello-node-connect-6458c8fb6f-7rs57
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31662
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.62s)
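
The echoserver body above is simply what a plain HTTP GET against the reported endpoint returns; a quick manual check (curl is not part of the test, and the port is run-specific) might look like:

  $ curl "$(out/minikube-linux-amd64 -p functional-101838 service hello-node-connect --url)"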
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 addons list
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [61b25840-fc5f-45c2-bbb9-f9bc586f4012] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007092713s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-101838 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-101838 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-101838 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-101838 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [948361fb-4d54-41fe-97f9-6f1c8069b5ac] Pending
helpers_test.go:342: "sp-pod" [948361fb-4d54-41fe-97f9-6f1c8069b5ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [948361fb-4d54-41fe-97f9-6f1c8069b5ac] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006527516s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-101838 exec sp-pod -- touch /tmp/mount/foo
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-101838 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-101838 delete -f testdata/storage-provisioner/pod.yaml: (1.031934215s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-101838 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [967629da-aea1-475e-99fd-e1562ce014d5] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [967629da-aea1-475e-99fd-e1562ce014d5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [967629da-aea1-475e-99fd-e1562ce014d5] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.011344611s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-101838 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.02s)
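
The double pod lifecycle is the actual assertion: a file written to the PVC-backed mount survives deleting and recreating the consuming pod. Condensed from the steps above:

  $ kubectl --context functional-101838 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-101838 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-101838 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-101838 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-101838 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-101838 exec sp-pod -- ls /tmp/mount    # foo is still there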
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh -n functional-101838 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 cp functional-101838:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd825966420/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh -n functional-101838 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)
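
"cp" is exercised in both directions, host to node and node back to host, with the node side verified over ssh. Sketch (the local destination path is illustrative; the test writes into a generated temp dir):

  $ out/minikube-linux-amd64 -p functional-101838 cp testdata/cp-test.txt /home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p functional-101838 ssh -n functional-101838 "sudo cat /home/docker/cp-test.txt"
  $ out/minikube-linux-amd64 -p functional-101838 cp functional-101838:/home/docker/cp-test.txt /tmp/cp-test.txt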
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-101838 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-hkz6f" [d03ca3b2-620a-4977-89e8-46cdb011bcca] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-hkz6f" [d03ca3b2-620a-4977-89e8-46cdb011bcca] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-hkz6f" [d03ca3b2-620a-4977-89e8-46cdb011bcca] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.019675383s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101838 exec mysql-596b7fcdbf-hkz6f -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-101838 exec mysql-596b7fcdbf-hkz6f -- mysql -ppassword -e "show databases;": exit status 1 (167.381456ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101838 exec mysql-596b7fcdbf-hkz6f -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-101838 exec mysql-596b7fcdbf-hkz6f -- mysql -ppassword -e "show databases;": exit status 1 (118.507753ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101838 exec mysql-596b7fcdbf-hkz6f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.53s)

TestFunctional/parallel/FileSync (0.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10306/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /etc/test/nested/copy/10306/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.24s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10306.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /etc/ssl/certs/10306.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10306.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /usr/share/ca-certificates/10306.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/103062.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /etc/ssl/certs/103062.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/103062.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /usr/share/ca-certificates/103062.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-101838 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh "sudo systemctl is-active docker": exit status 1 (404.163892ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh "sudo systemctl is-active crio": exit status 1 (378.865352ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

TestFunctional/parallel/License (0.59s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.7s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101838 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-101838
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-101838
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101838 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
| docker.io/library/minikube-local-cache-test | functional-101838  | sha256:2f68fd | 1.74kB |
| gcr.io/google-containers/addon-resizer      | functional-101838  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| docker.io/library/nginx                     | alpine             | sha256:c433c5 | 16.7MB |
| docker.io/library/nginx                     | latest             | sha256:a99a39 | 56.9MB |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101838 image ls --format json:
[{"id":"sha256:c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":["docker.io/library/nginx@sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16683089"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f
9f498f35b5b56d116e8a7e31dc91"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha
256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":["docker.io/library/nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e"],"repoTags":["docker.io/library/nginx:latest"],"size":"56882371"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-101838"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/
busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"34238163"},{"id":"sha256:2f68fd769ea9889c135900dc232c5e3de6e47a6a20e97cea9ea9b310649870f5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-101838"],"size":"1737"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","rep
oDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101838 image ls --format yaml:
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-101838
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests:
- docker.io/library/nginx@sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6
repoTags:
- docker.io/library/nginx:alpine
size: "16683089"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:2f68fd769ea9889c135900dc232c5e3de6e47a6a20e97cea9ea9b310649870f5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-101838
size: "1737"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests:
- docker.io/library/nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e
repoTags:
- docker.io/library/nginx:latest
size: "56882371"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh pgrep buildkitd: exit status 1 (319.657565ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image build -t localhost/my-image:functional-101838 testdata/build
2023/01/14 10:20:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 image build -t localhost/my-image:functional-101838 testdata/build: (3.49114357s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-101838 image build -t localhost/my-image:functional-101838 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.3s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s
#6 [2/3] RUN true
#6 DONE 0.9s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:92b69f121d13a29c4a292ea48b646ecbbcd0389a4488f347232d0b98b112ab08 0.0s done
#8 exporting config sha256:1a3f26dd50b549e656f1f78316a56b54a3147187f84a26284033c73e77462824 done
#8 naming to localhost/my-image:functional-101838 done
#8 DONE 0.1s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)

TestFunctional/parallel/ImageCommands/Setup (1.81s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.781213696s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-101838
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-101838 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.25s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-101838 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [dbdd5553-917d-4691-8fde-59c1fe055772] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [dbdd5553-917d-4691-8fde-59c1fe055772] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.012107114s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.25s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image load --daemon gcr.io/google-containers/addon-resizer:functional-101838
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 image load --daemon gcr.io/google-containers/addon-resizer:functional-101838: (3.567296475s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.82s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image load --daemon gcr.io/google-containers/addon-resizer:functional-101838
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 image load --daemon gcr.io/google-containers/addon-resizer:functional-101838: (3.562556359s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.988343538s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-101838
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image load --daemon gcr.io/google-containers/addon-resizer:functional-101838
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-101838 image load --daemon gcr.io/google-containers/addon-resizer:functional-101838: (3.875380855s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.13s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-101838 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.111.102.96 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-101838 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "408.749076ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "101.110449ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "445.819196ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1375: Took "75.431243ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/MountCmd/any-port (10.75s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-101838 /tmp/TestFunctionalparallelMountCmdany-port577756070/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673691640174997195" to /tmp/TestFunctionalparallelMountCmdany-port577756070/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673691640174997195" to /tmp/TestFunctionalparallelMountCmdany-port577756070/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673691640174997195" to /tmp/TestFunctionalparallelMountCmdany-port577756070/001/test-1673691640174997195
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.189823ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 14 10:20 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 14 10:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 14 10:20 test-1673691640174997195
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh cat /mount-9p/test-1673691640174997195
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-101838 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [5cfbb2c0-b66f-4e40-94a4-17fc42b121f3] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [5cfbb2c0-b66f-4e40-94a4-17fc42b121f3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [5cfbb2c0-b66f-4e40-94a4-17fc42b121f3] Running
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [5cfbb2c0-b66f-4e40-94a4-17fc42b121f3] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [5cfbb2c0-b66f-4e40-94a4-17fc42b121f3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.007012479s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-101838 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-101838 /tmp/TestFunctionalparallelMountCmdany-port577756070/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.75s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image save gcr.io/google-containers/addon-resizer:functional-101838 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image rm gcr.io/google-containers/addon-resizer:functional-101838
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-101838
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 image save --daemon gcr.io/google-containers/addon-resizer:functional-101838
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-101838
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/MountCmd/specific-port (2.07s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-101838 /tmp/TestFunctionalparallelMountCmdspecific-port2926742506/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.944927ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-101838 /tmp/TestFunctionalparallelMountCmdspecific-port2926742506/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-101838 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101838 ssh "sudo umount -f /mount-9p": exit status 1 (366.321496ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-101838 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-101838 /tmp/TestFunctionalparallelMountCmdspecific-port2926742506/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-101838
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-101838
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-101838
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (86.22s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-102116 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0114 10:21:17.779011   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-102116 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m26.218985193s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (86.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.65s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons enable ingress --alsologtostderr -v=5: (9.654744066s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.65s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (38.3s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-102116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-102116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.262664928s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-102116 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-102116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [4758d3af-6407-4e86-8010-b8a24ef961ad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [4758d3af-6407-4e86-8010-b8a24ef961ad] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005258495s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-102116 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102116 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons disable ingress-dns --alsologtostderr -v=1: (7.587737883s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102116 addons disable ingress --alsologtostderr -v=1: (7.234726032s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.30s)
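The ingress validation above reduces to two hand-replayable probes: HTTP routing through the controller and name resolution via ingress-dns. A minimal sketch, assuming the same profile and the nginx.example.com ingress rule from testdata are already in place:

  # route through the ingress controller by overriding the Host header
  minikube -p ingress-addon-legacy-102116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # resolve an ingress-dns hostname against the node IP
  nslookup hello-john.test "$(minikube -p ingress-addon-legacy-102116 ip)"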
TestJSONOutput/start/Command (43.67s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-102334 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0114 10:24:01.619895   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-102334 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (43.672642963s)
--- PASS: TestJSONOutput/start/Command (43.67s)
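With --output=json, each stdout line is a self-contained CloudEvents-style JSON object (TestErrorJSONOutput below shows the exact shape). A hedged sketch of consuming the stream, assuming a throwaway profile name json-demo and jq on the PATH:

  # print only the human-readable step messages from a JSON-mode start
  minikube start -p json-demo --output=json --driver=docker |
    jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'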
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-102334 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-102334 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (5.77s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-102334 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-102334 --output=json --user=testUser: (5.76788024s)
--- PASS: TestJSONOutput/stop/Command (5.77s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.28s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-102429 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-102429 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.154884ms)

-- stdout --
	{"specversion":"1.0","id":"009de8d3-7a07-4080-9fd0-7743dfae2e5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102429] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3a61875-c6d7-4a5d-a4cb-746a9a50c77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"73ddaecf-45e6-49c5-862a-0cadd4dad426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2486d3e1-81e3-45c2-ad74-916792932a75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig"}}
	{"specversion":"1.0","id":"380882ec-5b88-4327-ac9a-dcfc84d1c486","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube"}}
	{"specversion":"1.0","id":"9d19bc56-70f7-4cd8-8d70-f67e7026316f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"77da730b-895b-4e95-bb7a-920230e81772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-102429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-102429
--- PASS: TestErrorJSONOutput (0.28s)
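Failures ride the same event stream, so the structured error can be extracted mechanically. A sketch reusing the io.k8s.sigs.minikube.error shape from the stdout above (json-err-demo is a placeholder profile name):

  # surface the structured error emitted by a failing JSON-mode start
  minikube start -p json-err-demo --driver=fail --output=json |
    jq -r 'select(.type == "io.k8s.sigs.minikube.error")
           | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'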
TestKicCustomNetwork/create_custom_network (42.74s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-102430 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-102430 --network=: (40.544789997s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-102430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-102430
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-102430: (2.17420309s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.74s)
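--network= with an empty value, as above, lets minikube pick a network name; giving it a value attaches the node to a Docker network of that name instead. A rough equivalent with placeholder names (netdemo, my-custom-net):

  minikube start -p netdemo --network=my-custom-net --driver=docker --container-runtime=containerd
  docker network ls --format '{{.Name}}' | grep my-custom-net   # the network should now exist
  minikube delete -p netdemo                                    # clean up the profile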
TestKicCustomNetwork/use_default_bridge_network (27.03s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-102512 --network=bridge
E0114 10:25:27.807855   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:27.813117   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:27.823401   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:27.843700   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:27.884015   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:27.964342   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:28.124766   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:28.445425   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:29.086127   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:30.366604   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 10:25:32.928437   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-102512 --network=bridge: (25.050522287s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-102512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-102512
E0114 10:25:38.048857   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-102512: (1.958417377s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.03s)
TestKicExistingNetwork (26.88s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-102539 --network=existing-network
E0114 10:25:48.289349   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-102539 --network=existing-network: (24.71511701s)
helpers_test.go:175: Cleaning up "existing-network-102539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-102539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-102539: (1.998843104s)
--- PASS: TestKicExistingNetwork (26.88s)
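Here the Docker network is created first and minikube is pointed at it, which (per the passing test) reuses the network rather than erroring. A minimal replay, assuming nothing else owns the existing-network name:

  docker network create existing-network
  minikube start -p existing-demo --network=existing-network --driver=docker
  minikube delete -p existing-demo
  docker network rm existing-network   # remove the pre-made network yourself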
TestKicCustomSubnet (28.25s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-102606 --subnet=192.168.60.0/24
E0114 10:26:08.770286   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-102606 --subnet=192.168.60.0/24: (26.036511689s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-102606 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-102606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-102606
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-102606: (2.185088411s)
--- PASS: TestKicCustomSubnet (28.25s)
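The subnet assertion is a docker-side inspect of the network minikube generated, mirroring the command in the log (subnet-demo is a placeholder profile):

  minikube start -p subnet-demo --subnet=192.168.60.0/24 --driver=docker
  # expect: 192.168.60.0/24
  docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'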
TestKicStaticIP (28.38s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-102635 --static-ip=192.168.200.200
E0114 10:26:49.731413   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-102635 --static-ip=192.168.200.200: (26.105039815s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-102635 ip
helpers_test.go:175: Cleaning up "static-ip-102635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-102635
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-102635: (2.072535282s)
--- PASS: TestKicStaticIP (28.38s)
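The static-IP case follows the same start-then-inspect pattern, this time via minikube's own ip subcommand (ip-demo is a placeholder profile):

  minikube start -p ip-demo --static-ip=192.168.200.200 --driver=docker
  minikube -p ip-demo ip   # expect: 192.168.200.200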
TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)
TestMinikubeProfile (53.42s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-102703 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-102703 --driver=docker  --container-runtime=containerd: (22.381215169s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-102703 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-102703 --driver=docker  --container-runtime=containerd: (25.691881711s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-102703
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-102703
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-102703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-102703
E0114 10:27:53.110911   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.116188   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.126472   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.146735   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.187009   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.267372   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.427799   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:53.748345   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:27:54.389375   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-102703: (1.909122699s)
helpers_test.go:175: Cleaning up "first-102703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-102703
E0114 10:27:55.670368   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-102703: (2.211645021s)
--- PASS: TestMinikubeProfile (53.42s)
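With two profiles present, profile selects the active one and profile list reports them all, which is exactly what the steps above exercise. A sketch with placeholder names:

  minikube start -p first-demo --driver=docker
  minikube start -p second-demo --driver=docker
  minikube profile first-demo    # make first-demo the active profile
  minikube profile list -ojson   # machine-readable view of every profile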
TestMountStart/serial/StartWithMountFirst (5.09s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-102756 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0114 10:27:58.230774   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-102756 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.094079228s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.09s)
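The --mount-* flags above tune the 9p share that the VerifyMount steps read back with ls /minikube-host. A one-profile sketch keeping the logged flag values (mount-demo is a placeholder; that the share's source defaults to the host home directory is an assumption):

  minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-uid 0 \
    --mount-msize 6543 --mount-port 46464 --no-kubernetes --driver=docker
  minikube -p mount-demo ssh -- ls /minikube-host   # default mount target inside the node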
TestMountStart/serial/VerifyMountFirst (0.32s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-102756 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
TestMountStart/serial/StartWithMountSecond (7.96s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102756 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0114 10:28:03.350994   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102756 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.957248994s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.96s)
TestMountStart/serial/VerifyMountSecond (0.32s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102756 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)
TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-102756 --alsologtostderr -v=5
E0114 10:28:11.652270   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-102756 --alsologtostderr -v=5: (1.688953625s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)
TestMountStart/serial/VerifyMountPostDelete (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102756 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)
TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-102756
E0114 10:28:13.591528   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-102756: (1.239063395s)
--- PASS: TestMountStart/serial/Stop (1.24s)
TestMountStart/serial/RestartStopped (6.75s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102756
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102756: (5.748546814s)
--- PASS: TestMountStart/serial/RestartStopped (6.75s)
TestMountStart/serial/VerifyMountPostStop (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102756 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)
TestMultiNode/serial/FreshStart2Nodes (87.95s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102822 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0114 10:28:33.934086   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:28:34.072282   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:29:15.032752   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102822 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m27.417484942s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (87.95s)
TestMultiNode/serial/DeployApp2Nodes (5.2s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-102822 -- rollout status deployment/busybox: (3.431864155s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-2hdwz -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-jth2v -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-2hdwz -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-jth2v -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-2hdwz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-jth2v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.20s)
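The deployment check schedules busybox replicas across both nodes and asserts in-cluster DNS from each. Replayed by hand against any kubectl context (the app=busybox label selector is an assumption about the test manifest; $POD stands in for a generated pod name):

  kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
  kubectl rollout status deployment/busybox
  POD=$(kubectl get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
  kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local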
TestMultiNode/serial/PingHostFrom2Pods (0.9s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-2hdwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-2hdwz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-jth2v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102822 -- exec busybox-65db55d5d6-jth2v -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
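The host probe derives the host's IP from host.minikube.internal inside the pod, then pings it once; the awk/cut pipeline is lifted straight from the test (with $POD as a placeholder pod name):

  HOST_IP=$(kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl exec "$POD" -- ping -c 1 "$HOST_IP"   # resolved to 192.168.58.1 on this run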
TestMultiNode/serial/AddNode (33.21s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-102822 -v 3 --alsologtostderr
E0114 10:30:27.807626   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-102822 -v 3 --alsologtostderr: (32.50436616s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.21s)
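Adding a worker is a single command against the existing profile; the -v 3 --alsologtostderr flags above only raise log verbosity. A sketch with a placeholder profile:

  minikube node add -p multinode-demo   # joins a new worker (m03 on this run)
  minikube -p multinode-demo status     # should now report three nodes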
TestMultiNode/serial/ProfileList (0.36s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)
TestMultiNode/serial/CopyFile (11.51s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp testdata/cp-test.txt multinode-102822:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822:/home/docker/cp-test.txt multinode-102822-m02:/home/docker/cp-test_multinode-102822_multinode-102822-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m02 "sudo cat /home/docker/cp-test_multinode-102822_multinode-102822-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822:/home/docker/cp-test.txt multinode-102822-m03:/home/docker/cp-test_multinode-102822_multinode-102822-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m03 "sudo cat /home/docker/cp-test_multinode-102822_multinode-102822-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp testdata/cp-test.txt multinode-102822-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt multinode-102822:/home/docker/cp-test_multinode-102822-m02_multinode-102822.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822 "sudo cat /home/docker/cp-test_multinode-102822-m02_multinode-102822.txt"
E0114 10:30:36.953596   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822-m02:/home/docker/cp-test.txt multinode-102822-m03:/home/docker/cp-test_multinode-102822-m02_multinode-102822-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m03 "sudo cat /home/docker/cp-test_multinode-102822-m02_multinode-102822-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp testdata/cp-test.txt multinode-102822-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1355040214/001/cp-test_multinode-102822-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt multinode-102822:/home/docker/cp-test_multinode-102822-m03_multinode-102822.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822 "sudo cat /home/docker/cp-test_multinode-102822-m03_multinode-102822.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 cp multinode-102822-m03:/home/docker/cp-test.txt multinode-102822-m02:/home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 ssh -n multinode-102822-m02 "sudo cat /home/docker/cp-test_multinode-102822-m03_multinode-102822-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.51s)
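minikube cp covers the full host-to-node, node-to-host, and node-to-node matrix that this test walks. A condensed sketch (multinode-demo is a placeholder; the -m02 suffix follows the log's node naming):

  minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt        # host -> node
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt ./cp-test.copy.txt   # node -> host
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
    multinode-demo-m02:/home/docker/cp-test.txt                                              # node -> node
  minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"   # verify on target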
TestMultiNode/serial/StopNode (2.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-102822 node stop m03: (1.24049056s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102822 status: exit status 7 (554.690438ms)

-- stdout --
	multinode-102822
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-102822-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-102822-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr: exit status 7 (542.840684ms)

-- stdout --
	multinode-102822
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-102822-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-102822-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0114 10:30:43.724966  107205 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:30:43.725113  107205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:30:43.725125  107205 out.go:309] Setting ErrFile to fd 2...
	I0114 10:30:43.725131  107205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:30:43.725572  107205 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:30:43.725768  107205 out.go:303] Setting JSON to false
	I0114 10:30:43.725792  107205 mustload.go:65] Loading cluster: multinode-102822
	I0114 10:30:43.725900  107205 notify.go:220] Checking for updates...
	I0114 10:30:43.726156  107205 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:30:43.726184  107205 status.go:255] checking status of multinode-102822 ...
	I0114 10:30:43.726523  107205 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:30:43.750817  107205 status.go:330] multinode-102822 host status = "Running" (err=<nil>)
	I0114 10:30:43.750846  107205 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:30:43.751054  107205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822
	I0114 10:30:43.773820  107205 host.go:66] Checking if "multinode-102822" exists ...
	I0114 10:30:43.774061  107205 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:30:43.774103  107205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822
	I0114 10:30:43.797884  107205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822/id_rsa Username:docker}
	I0114 10:30:43.880457  107205 ssh_runner.go:195] Run: systemctl --version
	I0114 10:30:43.884241  107205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:30:43.893534  107205 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:30:43.988525  107205 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-14 10:30:43.913732649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:30:43.989111  107205 kubeconfig.go:92] found "multinode-102822" server: "https://192.168.58.2:8443"
	I0114 10:30:43.989137  107205 api_server.go:165] Checking apiserver status ...
	I0114 10:30:43.989181  107205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:30:43.998379  107205 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1213/cgroup
	I0114 10:30:44.005552  107205 api_server.go:181] apiserver freezer: "3:freezer:/docker/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/kubepods/burstable/poda912ee0a84c59151c3514b96c1018750/9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22"
	I0114 10:30:44.005615  107205 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6f4311dd36b9f3df67d726a2c8eefe9e59e0fb7c41cd2b6428ca3b3cd8152fcd/kubepods/burstable/poda912ee0a84c59151c3514b96c1018750/9a1ebe17670ca0100ec7926aa598af2f64cddc4094cd4398a647c15c87972a22/freezer.state
	I0114 10:30:44.012090  107205 api_server.go:203] freezer state: "THAWED"
	I0114 10:30:44.012121  107205 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0114 10:30:44.016528  107205 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0114 10:30:44.016553  107205 status.go:421] multinode-102822 apiserver status = Running (err=<nil>)
	I0114 10:30:44.016565  107205 status.go:257] multinode-102822 status: &{Name:multinode-102822 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:30:44.016586  107205 status.go:255] checking status of multinode-102822-m02 ...
	I0114 10:30:44.016826  107205 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:30:44.039919  107205 status.go:330] multinode-102822-m02 host status = "Running" (err=<nil>)
	I0114 10:30:44.039950  107205 host.go:66] Checking if "multinode-102822-m02" exists ...
	I0114 10:30:44.040223  107205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102822-m02
	I0114 10:30:44.063153  107205 host.go:66] Checking if "multinode-102822-m02" exists ...
	I0114 10:30:44.063391  107205 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:30:44.063425  107205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102822-m02
	I0114 10:30:44.087515  107205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15642-3818/.minikube/machines/multinode-102822-m02/id_rsa Username:docker}
	I0114 10:30:44.168019  107205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:30:44.176927  107205 status.go:257] multinode-102822-m02 status: &{Name:multinode-102822-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:30:44.176956  107205 status.go:255] checking status of multinode-102822-m03 ...
	I0114 10:30:44.177182  107205 cli_runner.go:164] Run: docker container inspect multinode-102822-m03 --format={{.State.Status}}
	I0114 10:30:44.200942  107205 status.go:330] multinode-102822-m03 host status = "Stopped" (err=<nil>)
	I0114 10:30:44.200970  107205 status.go:343] host is not running, skipping remaining checks
	I0114 10:30:44.200976  107205 status.go:257] multinode-102822-m03 status: &{Name:multinode-102822-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
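Stopping one worker leaves the cluster reachable, and status reports the partial outage through a non-zero exit code (7 on this run) as well as the per-node table. Sketch with a placeholder profile:

  minikube -p multinode-demo node stop m03
  minikube -p multinode-demo status
  echo $?   # 7 here, since one host is Stopped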
TestMultiNode/serial/StartAfterStop (30.91s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 node start m03 --alsologtostderr
E0114 10:30:55.493150   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-102822 node start m03 --alsologtostderr: (30.13421908s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.91s)
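Restarting the stopped worker is the symmetric command; the test then confirms membership through the Kubernetes API rather than minikube alone:

  minikube -p multinode-demo node start m03
  minikube -p multinode-demo status   # exit 0 again once m03 is Running
  kubectl get nodes                   # all nodes back in the Ready list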
TestMultiNode/serial/StopMultiNode (21.39s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-102822 stop: (21.150262995s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102822 status: exit status 7 (120.681463ms)
-- stdout --
	multinode-102822
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-102822-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr: exit status 7 (113.946266ms)
-- stdout --
	multinode-102822
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-102822-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0114 10:34:43.160706  117745 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:34:43.160904  117745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:34:43.160918  117745 out.go:309] Setting ErrFile to fd 2...
	I0114 10:34:43.160926  117745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:34:43.161065  117745 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:34:43.161288  117745 out.go:303] Setting JSON to false
	I0114 10:34:43.161316  117745 mustload.go:65] Loading cluster: multinode-102822
	I0114 10:34:43.161349  117745 notify.go:220] Checking for updates...
	I0114 10:34:43.161681  117745 config.go:180] Loaded profile config "multinode-102822": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:34:43.161699  117745 status.go:255] checking status of multinode-102822 ...
	I0114 10:34:43.162156  117745 cli_runner.go:164] Run: docker container inspect multinode-102822 --format={{.State.Status}}
	I0114 10:34:43.184482  117745 status.go:330] multinode-102822 host status = "Stopped" (err=<nil>)
	I0114 10:34:43.184504  117745 status.go:343] host is not running, skipping remaining checks
	I0114 10:34:43.184509  117745 status.go:257] multinode-102822 status: &{Name:multinode-102822 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:34:43.184529  117745 status.go:255] checking status of multinode-102822-m02 ...
	I0114 10:34:43.184750  117745 cli_runner.go:164] Run: docker container inspect multinode-102822-m02 --format={{.State.Status}}
	I0114 10:34:43.207124  117745 status.go:330] multinode-102822-m02 host status = "Stopped" (err=<nil>)
	I0114 10:34:43.207148  117745 status.go:343] host is not running, skipping remaining checks
	I0114 10:34:43.207155  117745 status.go:257] multinode-102822-m02 status: &{Name:multinode-102822-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.39s)

TestMultiNode/serial/RestartMultiNode (101.89s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102822 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0114 10:34:56.980562   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:35:27.808076   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102822 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m41.20699122s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102822 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (101.89s)
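Note: the go-template above prints each node's Ready condition. An equivalent jsonpath form, sketched on the assumption that kubectl's filter syntax is acceptable here (the test itself runs only the template shown), which also prints the node name:

	$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'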

TestMultiNode/serial/ValidateNameConflict (25.48s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-102822
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102822-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-102822-m02 --driver=docker  --container-runtime=containerd: exit status 14 (92.581736ms)
-- stdout --
	* [multinode-102822-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-102822-m02' is duplicated with machine name 'multinode-102822-m02' in profile 'multinode-102822'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102822-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102822-m03 --driver=docker  --container-runtime=containerd: (23.032565462s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-102822
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-102822: exit status 80 (343.464961ms)
-- stdout --
	* Adding node m03 to cluster multinode-102822
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-102822-m03 already exists in multinode-102822-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-102822-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-102822-m03: (1.940505083s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.48s)
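Note: multi-node machines are named <profile>-m02, <profile>-m03, and so on, so a standalone profile named multinode-102822-m02 collides immediately (MK_USAGE above), while multinode-102822-m03 only collides once node add tries to claim the m03 slot (GUEST_NODE_ADD). The names a profile has already claimed can be listed with the same command the test opens with:

	$ out/minikube-linux-amd64 node list -p multinode-102822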

TestScheduledStopUnix (101.22s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-104308 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-104308 --memory=2048 --driver=docker  --container-runtime=containerd: (24.573770825s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-104308 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-104308 -n scheduled-stop-104308
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-104308 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-104308 --cancel-scheduled
E0114 10:43:33.933697   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-104308 -n scheduled-stop-104308
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-104308
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-104308 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0114 10:44:16.156700   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-104308
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-104308: exit status 7 (92.511352ms)
-- stdout --
	scheduled-stop-104308
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-104308 -n scheduled-stop-104308
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-104308 -n scheduled-stop-104308: exit status 7 (92.721808ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-104308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-104308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-104308: (4.932255722s)
--- PASS: TestScheduledStopUnix (101.22s)
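Note: the three flags exercised above are the whole scheduled-stop surface and can be replayed by hand; every command below appears verbatim in the run (only the 5m duration is arbitrary):

	$ out/minikube-linux-amd64 stop -p scheduled-stop-104308 --schedule 5m
	$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-104308 -n scheduled-stop-104308
	$ out/minikube-linux-amd64 stop -p scheduled-stop-104308 --cancel-scheduled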

TestInsufficientStorage (15.13s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-104449 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-104449 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.580497282s)
-- stdout --
	{"specversion":"1.0","id":"0dc57465-e03c-4370-bada-381c6acbd2b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-104449] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d667655b-4d9b-4de1-9d46-f707b65023fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"37ecdcff-fa04-46ae-83e8-dd3b3e1b74c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"29cdedd8-92aa-4acd-8355-c33ceb982272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig"}}
	{"specversion":"1.0","id":"cbe3e4b5-2399-4e77-bcd1-c8c70ddc36a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube"}}
	{"specversion":"1.0","id":"3e19f100-a857-47e4-9274-e17f4074e880","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b807af7c-f770-4233-ba54-0e68f283208d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"94668ab2-50e2-4e03-9d98-a64cd6e543a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"06722479-f631-49b2-85ee-aa713c1fbc3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7aa35697-0cf8-4436-a5a6-7f05748eda29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8b96a5ec-e719-4ebf-842f-382902310037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-104449 in cluster insufficient-storage-104449","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e088c6d9-a6a2-4272-9767-976586c72a42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2e68803-dfec-4dc6-9b59-edbb1802f2ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3625fc83-4569-4468-9c96-78dcd095d7d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-104449 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-104449 --output=json --layout=cluster: exit status 7 (330.599865ms)
-- stdout --
	{"Name":"insufficient-storage-104449","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-104449","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0114 10:44:58.246413  141465 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-104449" does not appear in /home/jenkins/minikube-integration/15642-3818/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-104449 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-104449 --output=json --layout=cluster: exit status 7 (333.164302ms)
-- stdout --
	{"Name":"insufficient-storage-104449","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-104449","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0114 10:44:58.580057  141573 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-104449" does not appear in /home/jenkins/minikube-integration/15642-3818/kubeconfig
	E0114 10:44:58.587986  141573 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/insufficient-storage-104449/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-104449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-104449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-104449: (5.888769323s)
--- PASS: TestInsufficientStorage (15.13s)
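Note: exit status 26 maps to RSRC_DOCKER_STORAGE; here the condition is forced via MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (visible in the JSON events above). On a genuinely full host, the advice embedded in the error event reduces to freeing space before retrying:

	$ docker system prune -a
	$ minikube ssh -- docker system prune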

TestRunningBinaryUpgrade (91.66s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1678063012.exe start -p running-upgrade-104635 --memory=2200 --vm-driver=docker  --container-runtime=containerd
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1678063012.exe start -p running-upgrade-104635 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.820088881s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-104635 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-104635 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.273963051s)
helpers_test.go:175: Cleaning up "running-upgrade-104635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-104635
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-104635: (2.738247881s)
--- PASS: TestRunningBinaryUpgrade (91.66s)

TestMissingContainerUpgrade (131.03s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.495297956.exe start -p missing-upgrade-104736 --memory=2200 --driver=docker  --container-runtime=containerd
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.495297956.exe start -p missing-upgrade-104736 --memory=2200 --driver=docker  --container-runtime=containerd: (1m14.657332561s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-104736
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-104736: (11.747258604s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-104736
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-104736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-104736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.152908585s)
helpers_test.go:175: Cleaning up "missing-upgrade-104736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-104736
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-104736: (2.636208837s)
--- PASS: TestMissingContainerUpgrade (131.03s)

TestStoppedBinaryUpgrade/Setup (1.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (116.80622ms)
-- stdout --
	* [NoKubernetes-104504] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
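Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and the same MK_USAGE error fires if a version is pinned in the global config. Per the advice in the output, the working sequence is to clear any pinned version and start without Kubernetes (the second command is exactly what the StartWithStopK8s step below runs):

	$ minikube config unset kubernetes-version
	$ out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --driver=docker  --container-runtime=containerd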

TestPause/serial/Start (67.44s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-104504 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-104504 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m7.443265544s)
--- PASS: TestPause/serial/Start (67.44s)

TestNoKubernetes/serial/StartWithK8s (34.58s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104504 --driver=docker  --container-runtime=containerd
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104504 --driver=docker  --container-runtime=containerd: (34.06881283s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-104504 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.58s)

TestStoppedBinaryUpgrade/Upgrade (147.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1677962363.exe start -p stopped-upgrade-104504 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0114 10:45:27.807899   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1677962363.exe start -p stopped-upgrade-104504 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m14.626632848s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1677962363.exe -p stopped-upgrade-104504 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1677962363.exe -p stopped-upgrade-104504 stop: (1.244579368s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-104504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-104504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.208269035s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (147.08s)

TestNoKubernetes/serial/StartWithStopK8s (19.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.657572257s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-104504 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-104504 status -o json: exit status 2 (386.094627ms)
-- stdout --
	{"Name":"NoKubernetes-104504","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-104504
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-104504: (3.066137596s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.11s)

TestNoKubernetes/serial/Start (5.12s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --driver=docker  --container-runtime=containerd
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104504 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.12270544s)
--- PASS: TestNoKubernetes/serial/Start (5.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-104504 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-104504 "sudo systemctl is-active --quiet service kubelet": exit status 1 (381.489193ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
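Note: the non-zero exit is the pass condition here: systemctl is-active follows the LSB convention (0 = active, 3 = inactive), so the inner "ssh: Process exited with status 3" confirms the kubelet unit is down, which the ssh wrapper then surfaces as exit status 1 of its own. On a node without Kubernetes the check reduces to (expected output sketched under that assumption):

	$ sudo systemctl is-active kubelet; echo $?
	inactive
	3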

TestNoKubernetes/serial/ProfileList (1.55s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.55s)

TestNoKubernetes/serial/Stop (1.45s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-104504
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-104504: (1.452502553s)
--- PASS: TestNoKubernetes/serial/Stop (1.45s)

TestNoKubernetes/serial/StartNoArgs (6.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104504 --driver=docker  --container-runtime=containerd
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104504 --driver=docker  --container-runtime=containerd: (6.323099126s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.32s)

TestPause/serial/SecondStartNoReconfiguration (16.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-104504 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-104504 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.037336791s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.05s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-104504 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-104504 "sudo systemctl is-active --quiet service kubelet": exit status 1 (431.743819ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestNetworkPlugins/group/false (0.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-104615 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-104615 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (237.211847ms)
-- stdout --
	* [false-104615] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0114 10:46:15.908969  161137 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:46:15.909096  161137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:46:15.909108  161137 out.go:309] Setting ErrFile to fd 2...
	I0114 10:46:15.909115  161137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:46:15.909282  161137 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3818/.minikube/bin
	I0114 10:46:15.910004  161137 out.go:303] Setting JSON to false
	I0114 10:46:15.911836  161137 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5323,"bootTime":1673687853,"procs":623,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:46:15.911936  161137 start.go:135] virtualization: kvm guest
	I0114 10:46:15.914841  161137 out.go:177] * [false-104615] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:46:15.917007  161137 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:46:15.917008  161137 notify.go:220] Checking for updates...
	I0114 10:46:15.918654  161137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:46:15.920213  161137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3818/kubeconfig
	I0114 10:46:15.921724  161137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3818/.minikube
	I0114 10:46:15.923112  161137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:46:15.925097  161137 config.go:180] Loaded profile config "force-systemd-env-104603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:46:15.925203  161137 config.go:180] Loaded profile config "pause-104504": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:46:15.925264  161137 config.go:180] Loaded profile config "stopped-upgrade-104504": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0114 10:46:15.925321  161137 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:46:15.961351  161137 docker.go:138] docker version: linux-20.10.22
	I0114 10:46:15.961480  161137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:46:16.062338  161137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:60 SystemTime:2023-01-14 10:46:15.983739953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0114 10:46:16.062434  161137 docker.go:255] overlay module found
	I0114 10:46:16.064780  161137 out.go:177] * Using the docker driver based on user configuration
	I0114 10:46:16.066463  161137 start.go:294] selected driver: docker
	I0114 10:46:16.066482  161137 start.go:838] validating driver "docker" against <nil>
	I0114 10:46:16.066505  161137 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:46:16.068876  161137 out.go:177] 
	W0114 10:46:16.070340  161137 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0114 10:46:16.071730  161137 out.go:177] 
** /stderr **
helpers_test.go:175: Cleaning up "false-104615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-104615
--- PASS: TestNetworkPlugins/group/false (0.48s)
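Note: with --container-runtime=containerd minikube requires a CNI, so --cni=false fails fast (MK_USAGE, exit 14); that validation is the only thing this group asserts. The positive plugin cases elsewhere in this run start the same way but name a concrete plugin, along the lines of (profile name hypothetical):

	$ out/minikube-linux-amd64 start -p bridge-104615 --memory=2048 --cni=bridge --driver=docker  --container-runtime=containerd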

TestPause/serial/Pause (1.3s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-104504 --alsologtostderr -v=5
=== CONT  TestPause/serial/Pause
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-104504 --alsologtostderr -v=5: (1.296810836s)
--- PASS: TestPause/serial/Pause (1.30s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-104504 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-104504 --output=json --layout=cluster: exit status 2 (370.355534ms)
-- stdout --
	{"Name":"pause-104504","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-104504","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
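Note: the cluster layout reports component state as HTTP-like codes; this run alone shows 200 (OK), 405 (Stopped), 418 (Paused), 500 (Error) and 507 (InsufficientStorage). To scan the per-component codes quickly, the JSON can be piped through jq (jq is an add-on here, not something the harness uses):

	$ out/minikube-linux-amd64 status -p pause-104504 --output=json --layout=cluster | jq '.Nodes[].Components'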

TestPause/serial/Unpause (0.93s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-104504 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (1.19s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-104504 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-104504 --alsologtostderr -v=5: (1.192262152s)
--- PASS: TestPause/serial/PauseAgain (1.19s)

TestPause/serial/DeletePaused (2.88s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-104504 --alsologtostderr -v=5
=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-104504 --alsologtostderr -v=5: (2.881277505s)
--- PASS: TestPause/serial/DeletePaused (2.88s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-104504
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-104504: exit status 1 (23.716132ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-104504
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-104504
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestStartStop/group/old-k8s-version/serial/FirstStart (124.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-104807 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0114 10:48:33.933745   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-104807 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m4.627361726s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.63s)

TestStartStop/group/no-preload/serial/FirstStart (51.13s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-104947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-104947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (51.126091024s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.13s)

TestStartStop/group/embed-certs/serial/FirstStart (45.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-105009 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-105009 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (45.36303572s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-104807 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [61d351e2-520f-41f0-8616-17109432a290] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [61d351e2-520f-41f0-8616-17109432a290] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.012005292s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-104807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-104807 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-104807 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/old-k8s-version/serial/Stop (20.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-104807 --alsologtostderr -v=3
E0114 10:50:27.807442   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-104807 --alsologtostderr -v=3: (20.117794146s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.12s)

TestStartStop/group/no-preload/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-104947 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [01e5c876-13f2-429f-a284-de095e5bc5cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [01e5c876-13f2-429f-a284-de095e5bc5cd] Running
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012031346s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-104947 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104807 -n old-k8s-version-104807
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104807 -n old-k8s-version-104807: exit status 7 (108.21426ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-104807 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (431.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-104807 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-104807 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m11.53867712s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104807 -n old-k8s-version-104807
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (431.91s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-104947 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-104947 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/no-preload/serial/Stop (20.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-104947 --alsologtostderr -v=3
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-104947 --alsologtostderr -v=3: (20.077829932s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.08s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-105009 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [54b2810c-811c-4d24-af89-cf8d67a6b570] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [54b2810c-811c-4d24-af89-cf8d67a6b570] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.010675101s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-105009 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-105009 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-105009 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/embed-certs/serial/Stop (20.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-105009 --alsologtostderr -v=3
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-105009 --alsologtostderr -v=3: (20.121710808s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-104947 -n no-preload-104947
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-104947 -n no-preload-104947: exit status 7 (126.29531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-104947 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (314.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-104947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-104947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m13.93777675s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-104947 -n no-preload-104947
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (314.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-105009 -n embed-certs-105009
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-105009 -n embed-certs-105009: exit status 7 (105.092827ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-105009 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (313.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-105009 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 10:51:36.981640   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:52:53.110123   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
E0114 10:53:33.933991   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/addons-100651/client.crt: no such file or directory
E0114 10:55:27.807798   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-105009 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m12.893347852s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-105009 -n embed-certs-105009
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (313.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-b8mwn" [7d3e3bde-4dd2-42a7-abe7-d56e4efb4134] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-b8mwn" [7d3e3bde-4dd2-42a7-abe7-d56e4efb4134] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.013842917s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-b8mwn" [7d3e3bde-4dd2-42a7-abe7-d56e4efb4134] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007098743s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-104947 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-104947 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (3.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-104947 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-104947 -n no-preload-104947
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-104947 -n no-preload-104947: exit status 2 (424.944479ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-104947 -n no-preload-104947
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-104947 -n no-preload-104947: exit status 2 (428.335546ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-104947 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-104947 -n no-preload-104947
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-104947 -n no-preload-104947
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-c8742" [2e6a3bcf-ffb6-48b1-b93e-3aac99dfd48a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-c8742" [2e6a3bcf-ffb6-48b1-b93e-3aac99dfd48a] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.022935815s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-105641 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-105641 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (50.510887656s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.51s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-c8742" [2e6a3bcf-ffb6-48b1-b93e-3aac99dfd48a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007329972s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-105009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-105009 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/embed-certs/serial/Pause (3.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-105009 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-105009 -n embed-certs-105009
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-105009 -n embed-certs-105009: exit status 2 (446.280334ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-105009 -n embed-certs-105009
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-105009 -n embed-certs-105009: exit status 2 (451.735241ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-105009 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-105009 -n embed-certs-105009
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-105009 -n embed-certs-105009
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.58s)

TestStartStop/group/newest-cni/serial/FirstStart (36.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-105657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-105657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (36.94900099s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.95s)

TestNetworkPlugins/group/auto/Start (48.4s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (48.396036431s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.40s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-105641 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ef50b875-9317-4797-b1f5-8ebb1e5cc0ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:342: "busybox" [ef50b875-9317-4797-b1f5-8ebb1e5cc0ea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.012338737s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-105641 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-105657 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-105657 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-105657 --alsologtostderr -v=3: (1.289205494s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-105657 -n newest-cni-105657
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-105657 -n newest-cni-105657: exit status 7 (105.527188ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-105657 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (30.07s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-105657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-105657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (29.671484616s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-105657 -n newest-cni-105657
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-105641 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-105641 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (20.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-105641 --alsologtostderr -v=3
E0114 10:57:53.110285   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/ingress-addon-legacy-102116/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-105641 --alsologtostderr -v=3: (20.208859887s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (20.21s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-cvtmb" [eda00f03-7044-4771-b07e-1390ab003b91] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012332293s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-cvtmb" [eda00f03-7044-4771-b07e-1390ab003b91] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006318825s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-104807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641: exit status 7 (121.541067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-105641 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (563.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-105641 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-105641 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (9m23.011951352s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (563.38s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-104807 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-104807 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104807 -n old-k8s-version-104807
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104807 -n old-k8s-version-104807: exit status 2 (385.629937ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-104807 -n old-k8s-version-104807
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-104807 -n old-k8s-version-104807: exit status 2 (388.063564ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-104807 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104807 -n old-k8s-version-104807
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-104807 -n old-k8s-version-104807
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-105657 "sudo crictl images -o json"
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-105657 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-105657 -n newest-cni-105657
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-105657 -n newest-cni-105657: exit status 2 (451.416582ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-105657 -n newest-cni-105657
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-105657 -n newest-cni-105657: exit status 2 (442.931333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-105657 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-105657 -n newest-cni-105657
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-105657 -n newest-cni-105657
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.31s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-104615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/Start (47.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (47.183861272s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.18s)

TestNetworkPlugins/group/auto/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-104615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-pzfvh" [2e48d87b-f9ab-4472-bbb0-74aa5c43e64d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-pzfvh" [2e48d87b-f9ab-4472-bbb0-74aa5c43e64d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006172042s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)

TestNetworkPlugins/group/cilium/Start (92.2s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-104616 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-104616 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m32.201451622s)
--- PASS: TestNetworkPlugins/group/cilium/Start (92.20s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-104615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-104615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-104615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-q5q2g" [01ffd4d5-390d-4675-9313-6318fa7679ce] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013430249s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-104615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-104615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-nvm2h" [13ecc5b7-de66-4a97-b2fe-eb3fd739f3af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-nvm2h" [13ecc5b7-de66-4a97-b2fe-eb3fd739f3af] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005437343s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-104615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-104615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-104615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (297.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (4m57.44464792s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (297.44s)
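
The flag under test here is --enable-default-cni=true, which tells minikube to rely on kubelet's built-in bridge configuration instead of deploying a named CNI; recent minikube releases document it as a deprecated alias for --cni=bridge. The invocation from the log, reproducible by hand:

    # Start a containerd-backed profile using the default bridge CNI
    out/minikube-linux-amd64 start -p enable-default-cni-104615 --memory=2048 \
      --enable-default-cni=true --driver=docker --container-runtime=containerd \
      --wait=true --wait-timeout=5m --alsologtostderr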

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-9cjl6" [96638cf7-45aa-455a-8fab-f9b1f0e89f19] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015773782s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
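
The ControllerPod step polls for a Running cilium agent pod in kube-system. A hand-run equivalent using kubectl's built-in wait (selector and timeout from the log; the Ready condition is slightly stricter than the Running check the test uses):

    # Wait for the cilium agent to be scheduled and Ready
    kubectl --context cilium-104616 -n kube-system wait pod -l k8s-app=cilium --for=condition=Ready --timeout=10m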

TestNetworkPlugins/group/cilium/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-104616 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.36s)
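
KubeletFlags only verifies that a kubelet process is running inside the node container and records its command line. The same check by hand, with an illustrative filter (the grep is not part of the test):

    # Dump the kubelet command line from inside the node
    out/minikube-linux-amd64 ssh -p cilium-104616 "pgrep -a kubelet"
    # For example, confirm it points at containerd's CRI socket
    out/minikube-linux-amd64 ssh -p cilium-104616 "pgrep -a kubelet" | tr ' ' '\n' | grep container-runtime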

TestNetworkPlugins/group/cilium/NetCatPod (10.89s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-104616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-lfwsg" [fbc03a09-3443-42ee-a39a-f324ed46c2ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-lfwsg" [fbc03a09-3443-42ee-a39a-f324ed46c2ee] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006304978s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.89s)

TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-104616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-104616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-104616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (39.35s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0114 11:00:11.966229   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:11.971764   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:11.982095   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:12.002417   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:12.042701   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:12.124390   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:12.285086   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:12.606024   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:13.247008   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:14.527303   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:17.087497   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:22.207773   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:27.808411   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/functional-101838/client.crt: no such file or directory
E0114 11:00:32.448877   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
E0114 11:00:39.104266   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.109536   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.119782   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.140079   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.180374   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.260728   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.421140   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:39.741732   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:40.382422   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:41.662602   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-104615 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (39.350316122s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.35s)
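
The E0114 ... cert_rotation.go:168 lines interleaved above are not produced by this test: they appear to come from client-go's certificate-rotation watcher in the shared test process, which is still tracking client certificates of profiles (old-k8s-version-104807, no-preload-104947, functional-101838) that earlier tests already deleted. When reading a single test's output it can be easier to filter them out; a sketch, with a hypothetical capture file name:

    # Strip the cross-test cert_rotation noise from a captured run
    grep -v 'cert_rotation.go' bridge-start.log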

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-104615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-104615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-mnvtz" [2e9c34e5-b899-4f7e-aac8-669278305cd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 11:00:44.223501   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-mnvtz" [2e9c34e5-b899-4f7e-aac8-669278305cd8] Running
E0114 11:00:49.344008   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/no-preload-104947/client.crt: no such file or directory
E0114 11:00:52.929987   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/old-k8s-version-104807/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005855543s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-104615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-104615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-pppm5" [88b282b6-3801-4c78-a415-11ef0f945363] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 11:04:19.021092   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/kindnet-104615/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-pppm5" [88b282b6-3801-4c78-a415-11ef0f945363] Running
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005788021s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-95ht6" [280f05d7-d8fb-4fb0-a025-c8d32189d625] Running
E0114 11:07:28.592417   10306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-3818/.minikube/profiles/cilium-104616/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-95ht6" [280f05d7-d8fb-4fb0-a025-c8d32189d625] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011197579s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)
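
This step asserts that the user-deployed dashboard pod came back on its own after the stop/start cycle. A quick manual spot-check of the same condition:

    # The dashboard pod should be Running again without any redeploy
    kubectl --context default-k8s-diff-port-105641 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard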

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-95ht6" [280f05d7-d8fb-4fb0-a025-c8d32189d625] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005555929s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-105641 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-105641 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)
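
VerifyKubernetesImages asks containerd, via crictl, for its image list and compares it with the expected set for this Kubernetes version; the two images flagged above are known extras rather than failures. A hand-run version of the query, assuming jq is available on the host:

    # List the repo tags containerd knows about on the node
    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-105641 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'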

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-105641 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641: exit status 2 (360.607306ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641: exit status 2 (360.766664ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-105641 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-105641 -n default-k8s-diff-port-105641
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)
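
The exit status 2 results above are expected, which is why the test notes "(may be ok)": minikube status deliberately exits non-zero when a component is not Running, and a paused cluster reports the API server as Paused and the kubelet as Stopped. A condensed manual version of the same cycle:

    # Pause, inspect component state (non-zero exit is normal here), unpause
    out/minikube-linux-amd64 pause -p default-k8s-diff-port-105641
    out/minikube-linux-amd64 status -p default-k8s-diff-port-105641 --format='{{.APIServer}} {{.Kubelet}}' || true
    out/minikube-linux-amd64 unpause -p default-k8s-diff-port-105641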

Test skip (23/278)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-105641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-105641
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)

TestNetworkPlugins/group/kubenet (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-104615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-104615
--- SKIP: TestNetworkPlugins/group/kubenet (0.23s)

TestNetworkPlugins/group/flannel (0.22s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-104615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-104615
--- SKIP: TestNetworkPlugins/group/flannel (0.22s)

TestNetworkPlugins/group/custom-flannel (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-104616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-104616
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.25s)