Test Report: KVM_Linux 15909

468919b2fcd0c7cf0d4c8e9733c4c1a0b87a5208:2023-02-23:28038

Failed tests (2/306)

Order  Failed test                             Duration (s)
203    TestMultiNode/serial/RestartKeepsNodes  114.14
204    TestMultiNode/serial/DeleteNode         3.31
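
If this flake needs local reproduction, the failing subtest can be targeted on its own through the integration-test harness. A minimal sketch, assuming the TEST_ARGS passthrough described in minikube's contributor testing docs (exact flag names may differ by tree version):

    # Re-run only the failing subtest against the kvm2 driver (sketch; verify flags for your checkout)
    env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestMultiNode/serial/RestartKeepsNodes" make integration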
TestMultiNode/serial/RestartKeepsNodes (114.14s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-773885
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-773885
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-773885: (28.494785999s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773885 --wait=true -v=8 --alsologtostderr
E0223 22:21:14.560588   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:21:42.244313   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:21:48.831678   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:22:35.338149   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-773885 --wait=true -v=8 --alsologtostderr: exit status 90 (1m23.230284364s)
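
The E-level cert_rotation lines above reference client.crt files for profiles created by earlier suites in this run (ingress-addon-legacy-633033, addons-476957, functional-053497) that no longer exist on disk — most likely stale entries left behind in the shared kubeconfig. A hedged diagnostic, not part of the recorded run, to spot such stale references:

    # Hypothetical check: which client certificate paths does the shared kubeconfig still point to?
    grep -n 'client-certificate' /home/jenkins/minikube-integration/15909-59858/kubeconfig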

-- stdout --
	* [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-773885 in cluster multinode-773885
	* Restarting existing kvm2 VM for "multinode-773885" ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-773885-m02 in cluster multinode-773885
	* Restarting existing kvm2 VM for "multinode-773885-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.240
	
	

-- /stdout --
** stderr ** 
	I0223 22:21:13.262206   80620 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:21:13.262485   80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:21:13.262530   80620 out.go:309] Setting ErrFile to fd 2...
	I0223 22:21:13.262547   80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:21:13.263007   80620 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:21:13.263577   80620 out.go:303] Setting JSON to false
	I0223 22:21:13.264336   80620 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7426,"bootTime":1677183448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:21:13.264396   80620 start.go:135] virtualization: kvm guest
	I0223 22:21:13.267622   80620 out.go:177] * [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:21:13.268914   80620 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:21:13.268968   80620 notify.go:220] Checking for updates...
	I0223 22:21:13.270444   80620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:21:13.271889   80620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:13.273288   80620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 22:21:13.274630   80620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:21:13.275971   80620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:21:13.277689   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:13.277751   80620 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:21:13.278270   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:21:13.278328   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:21:13.292096   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0223 22:21:13.292502   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:21:13.293077   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:21:13.293100   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:21:13.293421   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:21:13.293604   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:13.326142   80620 out.go:177] * Using the kvm2 driver based on existing profile
	I0223 22:21:13.327601   80620 start.go:296] selected driver: kvm2
	I0223 22:21:13.327615   80620 start.go:857] validating driver "kvm2" against &{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inacce
l:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP:}
	I0223 22:21:13.327745   80620 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:21:13.327989   80620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:21:13.328051   80620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-59858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 22:21:13.341443   80620 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 22:21:13.342073   80620 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 22:21:13.342106   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:13.342116   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:13.342128   80620 start_flags.go:319] config:
	{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false ko
ng:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:13.342256   80620 iso.go:125] acquiring lock: {Name:mka4f25d544a3ff8c2a2fab814177dd4b23f9fc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:21:13.344079   80620 out.go:177] * Starting control plane node multinode-773885 in cluster multinode-773885
	I0223 22:21:13.345362   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:13.345394   80620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 22:21:13.345409   80620 cache.go:57] Caching tarball of preloaded images
	I0223 22:21:13.345481   80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:21:13.345493   80620 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:21:13.345663   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:21:13.345836   80620 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:21:13.345858   80620 start.go:364] acquiring machines lock for multinode-773885: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 22:21:13.345897   80620 start.go:368] acquired machines lock for "multinode-773885" in 21.539µs
	I0223 22:21:13.345910   80620 start.go:96] Skipping create...Using existing machine configuration
	I0223 22:21:13.345916   80620 fix.go:55] fixHost starting: 
	I0223 22:21:13.346182   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:21:13.346210   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:21:13.358898   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0223 22:21:13.359326   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:21:13.359874   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:21:13.359895   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:21:13.360176   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:21:13.360338   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:13.360464   80620 main.go:141] libmachine: (multinode-773885) Calling .GetState
	I0223 22:21:13.361968   80620 fix.go:103] recreateIfNeeded on multinode-773885: state=Stopped err=<nil>
	I0223 22:21:13.361991   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	W0223 22:21:13.362122   80620 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 22:21:13.364431   80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885" ...
	I0223 22:21:13.365638   80620 main.go:141] libmachine: (multinode-773885) Calling .Start
	I0223 22:21:13.365789   80620 main.go:141] libmachine: (multinode-773885) Ensuring networks are active...
	I0223 22:21:13.366413   80620 main.go:141] libmachine: (multinode-773885) Ensuring network default is active
	I0223 22:21:13.366726   80620 main.go:141] libmachine: (multinode-773885) Ensuring network mk-multinode-773885 is active
	I0223 22:21:13.367088   80620 main.go:141] libmachine: (multinode-773885) Getting domain xml...
	I0223 22:21:13.367766   80620 main.go:141] libmachine: (multinode-773885) Creating domain...
	I0223 22:21:14.564410   80620 main.go:141] libmachine: (multinode-773885) Waiting to get IP...
	I0223 22:21:14.565318   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:14.565709   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:14.565811   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.565729   80650 retry.go:31] will retry after 216.926568ms: waiting for machine to come up
	I0223 22:21:14.784224   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:14.784682   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:14.784711   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.784633   80650 retry.go:31] will retry after 249.246042ms: waiting for machine to come up
	I0223 22:21:15.035098   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.035423   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.035451   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.035397   80650 retry.go:31] will retry after 334.153469ms: waiting for machine to come up
	I0223 22:21:15.370820   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.371326   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.371360   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.371252   80650 retry.go:31] will retry after 394.396319ms: waiting for machine to come up
	I0223 22:21:15.766773   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.767259   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.767292   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.767204   80650 retry.go:31] will retry after 580.71112ms: waiting for machine to come up
	I0223 22:21:16.350049   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:16.350438   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:16.350468   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:16.350387   80650 retry.go:31] will retry after 812.475241ms: waiting for machine to come up
	I0223 22:21:17.164302   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:17.164761   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:17.164794   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:17.164713   80650 retry.go:31] will retry after 1.090615613s: waiting for machine to come up
	I0223 22:21:18.257489   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:18.257882   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:18.257949   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:18.257850   80650 retry.go:31] will retry after 1.207436911s: waiting for machine to come up
	I0223 22:21:19.467391   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:19.467804   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:19.467836   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:19.467758   80650 retry.go:31] will retry after 1.522373862s: waiting for machine to come up
	I0223 22:21:20.992569   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:20.992936   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:20.992965   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:20.992883   80650 retry.go:31] will retry after 2.133891724s: waiting for machine to come up
	I0223 22:21:23.129156   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:23.129626   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:23.129648   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:23.129597   80650 retry.go:31] will retry after 2.398257467s: waiting for machine to come up
	I0223 22:21:25.529031   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:25.529472   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:25.529508   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:25.529418   80650 retry.go:31] will retry after 2.616816039s: waiting for machine to come up
	I0223 22:21:28.149307   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:28.149703   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:28.149732   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:28.149668   80650 retry.go:31] will retry after 3.093858159s: waiting for machine to come up
	I0223 22:21:31.245491   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.245970   80620 main.go:141] libmachine: (multinode-773885) Found IP for machine: 192.168.39.240
	I0223 22:21:31.245992   80620 main.go:141] libmachine: (multinode-773885) Reserving static IP address...
	I0223 22:21:31.246035   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has current primary IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.246498   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.246523   80620 main.go:141] libmachine: (multinode-773885) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"}
	I0223 22:21:31.246531   80620 main.go:141] libmachine: (multinode-773885) Reserved static IP address: 192.168.39.240
	I0223 22:21:31.246540   80620 main.go:141] libmachine: (multinode-773885) Waiting for SSH to be available...
	I0223 22:21:31.246549   80620 main.go:141] libmachine: (multinode-773885) DBG | Getting to WaitForSSH function...
	I0223 22:21:31.248477   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.248821   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.248848   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.248945   80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH client type: external
	I0223 22:21:31.248970   80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa (-rw-------)
	I0223 22:21:31.249043   80620 main.go:141] libmachine: (multinode-773885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 22:21:31.249076   80620 main.go:141] libmachine: (multinode-773885) DBG | About to run SSH command:
	I0223 22:21:31.249094   80620 main.go:141] libmachine: (multinode-773885) DBG | exit 0
	I0223 22:21:31.338971   80620 main.go:141] libmachine: (multinode-773885) DBG | SSH cmd err, output: <nil>: 
	I0223 22:21:31.339315   80620 main.go:141] libmachine: (multinode-773885) Calling .GetConfigRaw
	I0223 22:21:31.339952   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:31.342708   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.343091   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.343112   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.343382   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:21:31.343587   80620 machine.go:88] provisioning docker machine ...
	I0223 22:21:31.343612   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:31.343856   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.344026   80620 buildroot.go:166] provisioning hostname "multinode-773885"
	I0223 22:21:31.344045   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.344189   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.346343   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.346741   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.346772   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.346912   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.347101   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.347235   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.347362   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.347563   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.347987   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.348001   80620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773885 && echo "multinode-773885" | sudo tee /etc/hostname
	I0223 22:21:31.483698   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885
	
	I0223 22:21:31.483729   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.486353   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.486705   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.486729   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.486927   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.487146   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.487349   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.487567   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.487765   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.488223   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.488247   80620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:21:31.610531   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:21:31.610563   80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
	I0223 22:21:31.610579   80620 buildroot.go:174] setting up certificates
	I0223 22:21:31.610589   80620 provision.go:83] configureAuth start
	I0223 22:21:31.610602   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.610887   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:31.613554   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.613875   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.613901   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.614087   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.616271   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.616732   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.616766   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.616828   80620 provision.go:138] copyHostCerts
	I0223 22:21:31.616880   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:21:31.616925   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
	I0223 22:21:31.616938   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:21:31.617049   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
	I0223 22:21:31.617142   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:21:31.617171   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
	I0223 22:21:31.617182   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:21:31.617225   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
	I0223 22:21:31.617338   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:21:31.617367   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
	I0223 22:21:31.617373   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:21:31.617412   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
	I0223 22:21:31.617475   80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-773885]
	I0223 22:21:31.813280   80620 provision.go:172] copyRemoteCerts
	I0223 22:21:31.813353   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:21:31.813402   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.816285   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.816679   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.816716   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.816918   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.817162   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.817351   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.817481   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:31.903913   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:21:31.904023   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 22:21:31.928843   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:21:31.928908   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 22:21:31.953083   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:21:31.953136   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:21:31.977825   80620 provision.go:86] duration metric: configureAuth took 367.222576ms
	I0223 22:21:31.977848   80620 buildroot.go:189] setting minikube options for container-runtime
	I0223 22:21:31.978069   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:31.978096   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:31.978344   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.980808   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.981196   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.981226   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.981404   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.981631   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.981794   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.981903   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.982052   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.982469   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.982488   80620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:21:32.100345   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0223 22:21:32.100366   80620 buildroot.go:70] root file system type: tmpfs
	I0223 22:21:32.100467   80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:21:32.100489   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:32.103003   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.103407   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:32.103436   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.103637   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:32.103824   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.103965   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.104148   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:32.104371   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:32.104858   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:32.104953   80620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:21:32.237312   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:21:32.237343   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:32.240081   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.240430   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:32.240481   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.240599   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:32.240764   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.240928   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.241022   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:32.241158   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:32.241558   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:32.241575   80620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:21:33.112176   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0223 22:21:33.112206   80620 machine.go:91] provisioned docker machine in 1.76860164s
	I0223 22:21:33.112216   80620 start.go:300] post-start starting for "multinode-773885" (driver="kvm2")
	I0223 22:21:33.112222   80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:21:33.112238   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.112595   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:21:33.112636   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.115711   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.116122   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.116159   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.116274   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.116476   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.116715   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.116933   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.204860   80620 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:21:33.208799   80620 command_runner.go:130] > NAME=Buildroot
	I0223 22:21:33.208819   80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0223 22:21:33.208823   80620 command_runner.go:130] > ID=buildroot
	I0223 22:21:33.208829   80620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0223 22:21:33.208833   80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0223 22:21:33.208858   80620 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 22:21:33.208867   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
	I0223 22:21:33.208924   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
	I0223 22:21:33.208996   80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
	I0223 22:21:33.209017   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
	I0223 22:21:33.209096   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:21:33.216834   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:21:33.238598   80620 start.go:303] post-start completed in 126.369412ms
	I0223 22:21:33.238618   80620 fix.go:57] fixHost completed within 19.892701007s
	I0223 22:21:33.238638   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.241628   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.242000   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.242020   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.242184   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.242377   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.242544   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.242697   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.242867   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:33.243253   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:33.243264   80620 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0223 22:21:33.359558   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190893.310436860
	
	I0223 22:21:33.359587   80620 fix.go:207] guest clock: 1677190893.310436860
	I0223 22:21:33.359596   80620 fix.go:220] Guest: 2023-02-23 22:21:33.31043686 +0000 UTC Remote: 2023-02-23 22:21:33.238622371 +0000 UTC m=+20.014549698 (delta=71.814489ms)
	I0223 22:21:33.359621   80620 fix.go:191] guest clock delta is within tolerance: 71.814489ms
	I0223 22:21:33.359628   80620 start.go:83] releasing machines lock for "multinode-773885", held for 20.013722401s
	I0223 22:21:33.359654   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.359925   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:33.362448   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.362830   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.362872   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.362979   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363495   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363673   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363761   80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:21:33.363798   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.363978   80620 ssh_runner.go:195] Run: cat /version.json
	I0223 22:21:33.364008   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.366567   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.366853   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.366894   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.366918   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.367103   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.367284   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.367338   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.367363   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.367483   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.367511   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.367637   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.367796   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.367946   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.368088   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.472525   80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:21:33.472587   80620 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
	I0223 22:21:33.472717   80620 ssh_runner.go:195] Run: systemctl --version
	I0223 22:21:33.478170   80620 command_runner.go:130] > systemd 247 (247)
	I0223 22:21:33.478214   80620 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0223 22:21:33.478449   80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:21:33.483322   80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0223 22:21:33.483517   80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 22:21:33.483559   80620 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:21:33.486877   80620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:21:33.486963   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:21:33.494937   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:21:33.509789   80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:21:33.522704   80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0223 22:21:33.523037   80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
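
	The two steps above are how minikube sidelines competing CNI configs: anything matching *bridge* or *podman* under /etc/cni/net.d is renamed with a .mk_disabled suffix so only minikube's own CNI (kindnet, for this three-node cluster) gets loaded. A shell-quoted sketch of the same find invocation:

	    # rename competing CNI configs out of the way (reversible: strip the suffix)
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
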
	I0223 22:21:33.523053   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:33.523114   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:33.547334   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:33.547357   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:33.547366   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:33.547373   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:33.547379   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:33.547386   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:33.547393   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:33.547402   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:33.547409   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:33.547429   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:33.547437   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:33.548840   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:33.548856   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:21:33.548865   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:21:33.548962   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:21:33.565249   80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:21:33.565271   80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:21:33.565339   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:21:33.574475   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:21:33.582936   80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:21:33.582977   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:21:33.591609   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:21:33.600301   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:21:33.608920   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:21:33.617470   80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:21:33.626224   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:21:33.634536   80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:21:33.642631   80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:21:33.642679   80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:21:33.650322   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:33.748276   80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
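
	The sysctl probe and ip_forward write above are the standard bridge-networking prerequisites for a Kubernetes node; a sketch of checking them by hand (assumes the br_netfilter module is already loaded, as it is in the minikube guest image):

	    # bridged traffic must be visible to iptables (expect "= 1")
	    sysctl net.bridge.bridge-nf-call-iptables
	    # IPv4 forwarding must be on so pod traffic can be routed off the node
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
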
	I0223 22:21:33.765231   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:21:33.765298   80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:21:33.783055   80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0223 22:21:33.783552   80620 command_runner.go:130] > [Unit]
	I0223 22:21:33.783568   80620 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:21:33.783574   80620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:21:33.783579   80620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0223 22:21:33.783584   80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0223 22:21:33.783589   80620 command_runner.go:130] > StartLimitBurst=3
	I0223 22:21:33.783595   80620 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:21:33.783598   80620 command_runner.go:130] > [Service]
	I0223 22:21:33.783603   80620 command_runner.go:130] > Type=notify
	I0223 22:21:33.783607   80620 command_runner.go:130] > Restart=on-failure
	I0223 22:21:33.783614   80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:21:33.783625   80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:21:33.783631   80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:21:33.783640   80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:21:33.783647   80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:21:33.783653   80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:21:33.783660   80620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:21:33.783668   80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:21:33.783674   80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:21:33.783678   80620 command_runner.go:130] > ExecStart=
	I0223 22:21:33.783691   80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0223 22:21:33.783696   80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:21:33.783702   80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:21:33.783708   80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:21:33.783712   80620 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:21:33.783715   80620 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:21:33.783719   80620 command_runner.go:130] > LimitCORE=infinity
	I0223 22:21:33.783724   80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:21:33.783728   80620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:21:33.783733   80620 command_runner.go:130] > TasksMax=infinity
	I0223 22:21:33.783736   80620 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:21:33.783742   80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:21:33.783746   80620 command_runner.go:130] > Delegate=yes
	I0223 22:21:33.783751   80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:21:33.783755   80620 command_runner.go:130] > KillMode=process
	I0223 22:21:33.783758   80620 command_runner.go:130] > [Install]
	I0223 22:21:33.783765   80620 command_runner.go:130] > WantedBy=multi-user.target
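
	The drop-in dumped above clears the inherited ExecStart= (the empty assignment) before redefining it, as its own comments explain. To cross-check the unit against the daemon's effective cgroup driver, the same two probes minikube itself issues in this log can be run by hand:

	    systemctl cat docker.service               # merged unit, including this drop-in
	    docker info --format '{{.CgroupDriver}}'   # reports "cgroupfs" on this node
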
	I0223 22:21:33.784203   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:21:33.800310   80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0223 22:21:33.820089   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:21:33.831934   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:21:33.843320   80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 22:21:33.870509   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:21:33.882768   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:21:33.898405   80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:21:33.898433   80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
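
	With /etc/crictl.yaml now pointing at cri-dockerd, crictl reaches Docker through the CRI shim rather than containerd. A quick sanity check (the same probe minikube runs a few lines below):

	    sudo crictl version    # reads the endpoints above; reports RuntimeName: docker
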
	I0223 22:21:33.898700   80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:21:33.998916   80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:21:34.101490   80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:21:34.101526   80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:21:34.117559   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:34.221898   80620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:21:35.643194   80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.421256026s)
	I0223 22:21:35.643291   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:21:35.759716   80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:21:35.863224   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:21:35.965951   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:36.072240   80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:21:36.092427   80620 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:21:36.092508   80620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:21:36.104108   80620 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:21:36.104128   80620 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:21:36.104134   80620 command_runner.go:130] > Device: 16h/22d	Inode: 814         Links: 1
	I0223 22:21:36.104143   80620 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0223 22:21:36.104156   80620 command_runner.go:130] > Access: 2023-02-23 22:21:36.038985633 +0000
	I0223 22:21:36.104168   80620 command_runner.go:130] > Modify: 2023-02-23 22:21:36.038985633 +0000
	I0223 22:21:36.104180   80620 command_runner.go:130] > Change: 2023-02-23 22:21:36.041985633 +0000
	I0223 22:21:36.104189   80620 command_runner.go:130] >  Birth: -
	I0223 22:21:36.104213   80620 start.go:553] Will wait 60s for crictl version
	I0223 22:21:36.104260   80620 ssh_runner.go:195] Run: which crictl
	I0223 22:21:36.110223   80620 command_runner.go:130] > /usr/bin/crictl
	I0223 22:21:36.110588   80620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:21:36.185549   80620 command_runner.go:130] > Version:  0.1.0
	I0223 22:21:36.185577   80620 command_runner.go:130] > RuntimeName:  docker
	I0223 22:21:36.185585   80620 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0223 22:21:36.185593   80620 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:21:36.185626   80620 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0223 22:21:36.185698   80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:21:36.217919   80620 command_runner.go:130] > 20.10.23
	I0223 22:21:36.219196   80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:21:36.248973   80620 command_runner.go:130] > 20.10.23
	I0223 22:21:36.253095   80620 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0223 22:21:36.253136   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:36.255830   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:36.256233   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:36.256260   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:36.256492   80620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0223 22:21:36.260126   80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
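
	That one-liner is dense; spelled out, it rewrites /etc/hosts by dropping any stale host.minikube.internal entry and appending the current libvirt gateway IP (an equivalent sketch using the same paths and IP as this run):

	    # keep everything except the old mapping
	    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	    # append the fresh mapping for the gateway
	    printf '192.168.39.1\thost.minikube.internal\n' >> /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
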
	I0223 22:21:36.272218   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:36.272269   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:36.294497   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:36.294518   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:36.294523   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:36.294528   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:36.294532   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:36.294536   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:36.294541   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:36.294546   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:36.294550   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:36.294554   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:36.294558   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:36.295537   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:36.295553   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:21:36.295600   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:36.317087   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:36.317104   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:36.317109   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:36.317114   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:36.317119   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:36.317123   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:36.317127   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:36.317133   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:36.317137   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:36.317142   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:36.317149   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:36.318116   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:36.318131   80620 cache_images.go:84] Images are preloaded, skipping loading
	I0223 22:21:36.318198   80620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:21:36.351288   80620 command_runner.go:130] > cgroupfs
	I0223 22:21:36.352347   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:36.352366   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:36.352384   80620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:21:36.352404   80620 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773885 NodeName:multinode-773885 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:21:36.352535   80620 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-773885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
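
	Note that this generated config is never fed to a full `kubeadm init` on restart; minikube replays individual phases against it, as the tail of this log shows. The two phase invocations from this run, verbatim:

	    sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
	      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
	      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
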
	I0223 22:21:36.352608   80620 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-773885 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
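
	As in the docker.service drop-in earlier, the empty ExecStart= resets the command inherited from the base kubelet unit before the flag-laden one is set. The merged result can be inspected on the guest with:

	    systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in written below
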
	I0223 22:21:36.352654   80620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:21:36.361734   80620 command_runner.go:130] > kubeadm
	I0223 22:21:36.361745   80620 command_runner.go:130] > kubectl
	I0223 22:21:36.361749   80620 command_runner.go:130] > kubelet
	I0223 22:21:36.361984   80620 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:21:36.362045   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 22:21:36.369631   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0223 22:21:36.384815   80620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:21:36.399471   80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0223 22:21:36.414791   80620 ssh_runner.go:195] Run: grep 192.168.39.240	control-plane.minikube.internal$ /etc/hosts
	I0223 22:21:36.418133   80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:21:36.429567   80620 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885 for IP: 192.168.39.240
	I0223 22:21:36.429596   80620 certs.go:186] acquiring lock for shared ca certs: {Name:mkb47a35d7b33f6ba829c92dc16cfaf70cb716c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:36.429732   80620 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key
	I0223 22:21:36.429768   80620 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key
	I0223 22:21:36.429863   80620 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key
	I0223 22:21:36.429933   80620 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key.ac2ca5a7
	I0223 22:21:36.429971   80620 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key
	I0223 22:21:36.429982   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 22:21:36.429999   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 22:21:36.430009   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 22:21:36.430023   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 22:21:36.430035   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:21:36.430047   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:21:36.430058   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:21:36.430070   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:21:36.430120   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem (1338 bytes)
	W0223 22:21:36.430145   80620 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927_empty.pem, impossibly tiny 0 bytes
	I0223 22:21:36.430155   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:21:36.430178   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem (1078 bytes)
	I0223 22:21:36.430200   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:21:36.430224   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem (1671 bytes)
	I0223 22:21:36.430265   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:21:36.430293   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.430307   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.430319   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem -> /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.430835   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 22:21:36.452666   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 22:21:36.474354   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 22:21:36.496347   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 22:21:36.518192   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:21:36.539742   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:21:36.561567   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:21:36.582936   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:21:36.605667   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /usr/share/ca-certificates/669272.pem (1708 bytes)
	I0223 22:21:36.627349   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:21:36.649138   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem --> /usr/share/ca-certificates/66927.pem (1338 bytes)
	I0223 22:21:36.670645   80620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 22:21:36.685674   80620 ssh_runner.go:195] Run: openssl version
	I0223 22:21:36.690629   80620 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0223 22:21:36.690924   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66927.pem && ln -fs /usr/share/ca-certificates/66927.pem /etc/ssl/certs/66927.pem"
	I0223 22:21:36.699754   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.703759   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.704095   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.704128   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.709182   80620 command_runner.go:130] > 51391683
	I0223 22:21:36.709238   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/66927.pem /etc/ssl/certs/51391683.0"
	I0223 22:21:36.718122   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669272.pem && ln -fs /usr/share/ca-certificates/669272.pem /etc/ssl/certs/669272.pem"
	I0223 22:21:36.726789   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730766   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730841   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730885   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.735795   80620 command_runner.go:130] > 3ec20f2e
	I0223 22:21:36.736176   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/669272.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 22:21:36.745026   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:21:36.753682   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757609   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757830   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757864   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.762876   80620 command_runner.go:130] > b5213941
	I0223 22:21:36.762930   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
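
	The three blocks above implement OpenSSL's subject-hash lookup convention: a certificate is trusted once a symlink named <hash>.0 points at it under /etc/ssl/certs. A sketch using the minikubeCA values from this run:

	    # compute the subject hash (prints b5213941 for minikubeCA here)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # install the lookup symlink OpenSSL expects
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
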
	I0223 22:21:36.771746   80620 kubeadm.go:401] StartCluster: {Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:36.771889   80620 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:21:36.795673   80620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 22:21:36.804158   80620 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0223 22:21:36.804177   80620 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0223 22:21:36.804208   80620 command_runner.go:130] > /var/lib/minikube/etcd:
	I0223 22:21:36.804223   80620 command_runner.go:130] > member
	I0223 22:21:36.804253   80620 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 22:21:36.804270   80620 kubeadm.go:633] restartCluster start
	I0223 22:21:36.804326   80620 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 22:21:36.812345   80620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:36.812718   80620 kubeconfig.go:135] verify returned: extract IP: "multinode-773885" does not appear in /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:36.812798   80620 kubeconfig.go:146] "multinode-773885" context is missing from /home/jenkins/minikube-integration/15909-59858/kubeconfig - will repair!
	I0223 22:21:36.813094   80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:36.813506   80620 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:36.813719   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:21:36.814424   80620 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 22:21:36.814616   80620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 22:21:36.822391   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:36.822434   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:36.832386   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
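
	This probe now repeats on a roughly 500ms cadence for the next ten seconds before restartCluster gives up; it looks for the newest process whose full command line matches the apiserver pattern:

	    # -x: exact match, -f: match the full command line, -n: newest matching pid
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
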
	I0223 22:21:37.333153   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:37.333231   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:37.344298   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:37.832833   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:37.832931   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:37.843863   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:38.333039   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:38.333157   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:38.344397   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:38.833335   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:38.833418   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:38.844307   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:39.332585   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:39.332660   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:39.343665   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:39.833274   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:39.833358   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:39.844484   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:40.332983   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:40.333065   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:40.344099   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:40.832657   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:40.832750   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:40.843615   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:41.333154   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:41.333245   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:41.344059   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:41.832619   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:41.832703   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:41.843654   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:42.333248   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:42.333328   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:42.344533   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:42.833157   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:42.833256   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:42.843975   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:43.333351   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:43.333418   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:43.344740   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:43.832562   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:43.832672   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:43.843659   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:44.333327   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:44.333407   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:44.344578   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:44.833173   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:44.833245   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:44.844332   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:45.332909   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:45.333037   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:45.344107   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:45.832647   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:45.832732   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:45.843986   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.332538   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.332617   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.343428   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.833367   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.833455   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.844521   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.844541   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.844582   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.854411   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.854446   80620 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 22:21:46.854455   80620 kubeadm.go:1120] stopping kube-system containers ...
	I0223 22:21:46.854520   80620 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:21:46.882631   80620 command_runner.go:130] > a31cf43457e0
	I0223 22:21:46.882655   80620 command_runner.go:130] > b83daa4cdd8d
	I0223 22:21:46.882661   80620 command_runner.go:130] > 75e472928e30
	I0223 22:21:46.882666   80620 command_runner.go:130] > 20f2e353f8d4
	I0223 22:21:46.882674   80620 command_runner.go:130] > f6b2b873cba9
	I0223 22:21:46.882682   80620 command_runner.go:130] > 6becaf5c8640
	I0223 22:21:46.882688   80620 command_runner.go:130] > a2a9a29b5a41
	I0223 22:21:46.882694   80620 command_runner.go:130] > f284ce294fa0
	I0223 22:21:46.882700   80620 command_runner.go:130] > 8d29ee663e61
	I0223 22:21:46.882707   80620 command_runner.go:130] > baad115b76c6
	I0223 22:21:46.882725   80620 command_runner.go:130] > 53723346fe3c
	I0223 22:21:46.882735   80620 command_runner.go:130] > 6a41aad93299
	I0223 22:21:46.882743   80620 command_runner.go:130] > 745d6ec7adf4
	I0223 22:21:46.882750   80620 command_runner.go:130] > 979e703c6176
	I0223 22:21:46.882757   80620 command_runner.go:130] > 3b6e6d975efa
	I0223 22:21:46.882766   80620 command_runner.go:130] > 072b5f08a10f
	I0223 22:21:46.882797   80620 docker.go:456] Stopping containers: [a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f]
	I0223 22:21:46.882868   80620 ssh_runner.go:195] Run: docker stop a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f
	I0223 22:21:46.908823   80620 command_runner.go:130] > a31cf43457e0
	I0223 22:21:46.908844   80620 command_runner.go:130] > b83daa4cdd8d
	I0223 22:21:46.908853   80620 command_runner.go:130] > 75e472928e30
	I0223 22:21:46.908858   80620 command_runner.go:130] > 20f2e353f8d4
	I0223 22:21:46.908865   80620 command_runner.go:130] > f6b2b873cba9
	I0223 22:21:46.908870   80620 command_runner.go:130] > 6becaf5c8640
	I0223 22:21:46.908876   80620 command_runner.go:130] > a2a9a29b5a41
	I0223 22:21:46.909404   80620 command_runner.go:130] > f284ce294fa0
	I0223 22:21:46.909419   80620 command_runner.go:130] > 8d29ee663e61
	I0223 22:21:46.909424   80620 command_runner.go:130] > baad115b76c6
	I0223 22:21:46.909441   80620 command_runner.go:130] > 53723346fe3c
	I0223 22:21:46.909828   80620 command_runner.go:130] > 6a41aad93299
	I0223 22:21:46.909847   80620 command_runner.go:130] > 745d6ec7adf4
	I0223 22:21:46.909853   80620 command_runner.go:130] > 979e703c6176
	I0223 22:21:46.909858   80620 command_runner.go:130] > 3b6e6d975efa
	I0223 22:21:46.909864   80620 command_runner.go:130] > 072b5f08a10f
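
	The stop sequence above leans on the kubelet's container naming convention (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), which lets a regex name filter select every kube-system container in one pass. The two commands combined into a single pipeline (a sketch):

	    # list and stop all kube-system pod containers
	    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
	      | xargs -r docker stop
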
	I0223 22:21:46.911025   80620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 22:21:46.925825   80620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 22:21:46.933780   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 22:21:46.933807   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 22:21:46.933818   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 22:21:46.933842   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:21:46.934068   80620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
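Exit status 2 here means none of the four control-plane kubeconfigs exist yet, so the stale-config cleanup is skipped and the files are regenerated from scratch. A minimal sketch of that decision, assuming a simple stat-based check rather than minikube's exact code:

    package main

    import "os"

    // staleConfigsPresent reports whether all four kubeconfigs checked in the
    // log exist; any missing file means there is nothing stale to clean up.
    func staleConfigsPresent() bool {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if _, err := os.Stat(f); err != nil {
                return false
            }
        }
        return true
    }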
	I0223 22:21:46.934127   80620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 22:21:46.942292   80620 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 22:21:46.942311   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.060140   80620 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:21:47.060421   80620 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 22:21:47.060722   80620 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 22:21:47.061266   80620 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 22:21:47.061579   80620 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0223 22:21:47.062097   80620 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0223 22:21:47.062730   80620 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0223 22:21:47.063273   80620 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0223 22:21:47.063668   80620 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0223 22:21:47.064166   80620 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 22:21:47.064500   80620 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 22:21:47.064789   80620 command_runner.go:130] > [certs] Using the existing "sa" key
	I0223 22:21:47.066082   80620 command_runner.go:130] ! W0223 22:21:47.003599    1259 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.066190   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.118462   80620 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:21:47.207705   80620 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:21:47.310176   80620 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:21:47.491530   80620 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:21:47.570853   80620 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:21:47.573364   80620 command_runner.go:130] ! W0223 22:21:47.061082    1265 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.573502   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.637325   80620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:21:47.638644   80620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:21:47.638664   80620 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:21:47.751602   80620 command_runner.go:130] ! W0223 22:21:47.567753    1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.751640   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.811937   80620 command_runner.go:130] ! W0223 22:21:47.761774    1293 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.829349   80620 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:21:47.829375   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:21:47.829384   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:21:47.829392   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:21:47.829573   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.919203   80620 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:21:47.922916   80620 command_runner.go:130] ! W0223 22:21:47.858650    1302 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
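Reconfiguration runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init, which is why every certificate above is reported as "existing". A sketch of the same sequence, assuming a shell-out via os/exec:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the phase order visible in the log; each phase is
    // idempotent, so reruns reuse certs and rewrite kubeconfigs in place.
    func runInitPhases(configPath string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", configPath)
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v failed: %v\n%s", args, err, out)
            }
        }
        return nil
    }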
	I0223 22:21:47.923089   80620 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:21:47.923171   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:48.438055   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:48.938524   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:49.437773   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:49.938504   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:50.438625   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:50.455679   80620 command_runner.go:130] > 1675
	I0223 22:21:50.456038   80620 api_server.go:71] duration metric: took 2.532952682s to wait for apiserver process to appear ...
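The pgrep retries above run on a roughly 500ms cadence until the kube-apiserver process exists (the final attempt prints its PID, 1675). A minimal polling sketch under those assumptions:

    package main

    import (
        "errors"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep every 500ms until it exits 0,
    // i.e. until a kube-apiserver process matching the pattern appears.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for kube-apiserver process")
    }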
	I0223 22:21:50.456061   80620 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:21:50.456073   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:50.456563   80620 api_server.go:268] stopped: https://192.168.39.240:8443/healthz: Get "https://192.168.39.240:8443/healthz": dial tcp 192.168.39.240:8443: connect: connection refused
	I0223 22:21:50.957285   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:53.851413   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 22:21:53.851440   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 22:21:53.957622   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:53.962959   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:53.962996   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:54.457567   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:54.462593   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:54.462613   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:54.957140   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:54.975573   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:54.975619   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:55.457159   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:55.468052   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
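The 403 and 500 responses above are expected transients: the probe is unauthenticated, so system:anonymous is forbidden until the rbac/bootstrap-roles post-start hook finishes, and /healthz then returns 500 until every hook reports ok. A polling sketch that treats anything but 200 as "keep waiting", assuming a plain HTTPS client with certificate verification disabled for the probe only:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it answers 200 "ok"; 403s and 500s
    // like those in the log are retried rather than treated as fatal.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s did not become ready within %s", url, timeout)
    }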
	I0223 22:21:55.468134   80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0223 22:21:55.468145   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:55.468159   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:55.468173   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:55.478605   80620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0223 22:21:55.478631   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:55.478639   80620 round_trippers.go:580]     Content-Length: 263
	I0223 22:21:55.478645   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:55 GMT
	I0223 22:21:55.478651   80620 round_trippers.go:580]     Audit-Id: 0e80152b-56d5-4ba7-8d3d-ebf4ef092ec4
	I0223 22:21:55.478656   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:55.478661   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:55.478667   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:55.478677   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:55.478720   80620 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:21:55.478820   80620 api_server.go:140] control plane version: v1.26.1
	I0223 22:21:55.478837   80620 api_server.go:130] duration metric: took 5.022769855s to wait for apiserver health ...
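Once healthz is green, the control-plane version is confirmed with a GET /version (the round_trippers trace above) and the standard version payload is decoded. A minimal sketch of that decode, assuming an already-configured HTTP client:

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // versionInfo holds the fields of the /version payload shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func controlPlaneVersion(c *http.Client, base string) (string, error) {
        resp, err := c.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil // "v1.26.1" in the run above
    }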
	I0223 22:21:55.478847   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:55.478864   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:55.481215   80620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 22:21:55.482654   80620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:21:55.487827   80620 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:21:55.487850   80620 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0223 22:21:55.487860   80620 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0223 22:21:55.487870   80620 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:21:55.487881   80620 command_runner.go:130] > Access: 2023-02-23 22:21:25.431985633 +0000
	I0223 22:21:55.487897   80620 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0223 22:21:55.487905   80620 command_runner.go:130] > Change: 2023-02-23 22:21:23.668985633 +0000
	I0223 22:21:55.487910   80620 command_runner.go:130] >  Birth: -
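CNI selection happens before the manifest is applied: with three nodes in the profile, cni.go recommends kindnet, and the stat of /opt/cni/bin/portmap verifies the standard plugin binaries are already on the node. A sketch of that decision, simplified from what the log implies rather than taken from minikube's source:

    package main

    import "os"

    // recommendCNI reflects the "3 nodes found, recommending kindnet" line:
    // any multi-node cluster gets kindnet, after confirming the portmap
    // plugin binary exists (the stat in the log above).
    func recommendCNI(nodeCount int) (string, error) {
        if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
            return "", err
        }
        if nodeCount > 1 {
            return "kindnet", nil
        }
        return "", nil // single-node default depends on the container runtime
    }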
	I0223 22:21:55.488315   80620 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:21:55.488335   80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:21:55.519404   80620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:21:56.635297   80620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:21:56.642116   80620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:21:56.645709   80620 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 22:21:56.664280   80620 command_runner.go:130] > daemonset.apps/kindnet configured
	I0223 22:21:56.666573   80620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.147136699s)
	I0223 22:21:56.666612   80620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:21:56.666717   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:21:56.666728   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.666739   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.666748   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.670034   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:21:56.670049   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.670056   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.670062   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.670081   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.670087   80620 round_trippers.go:580]     Audit-Id: 03e54a77-0840-4896-9a52-5cdd73109000
	I0223 22:21:56.670100   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.670111   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.671358   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
	I0223 22:21:56.675255   80620 system_pods.go:59] 12 kube-system pods found
	I0223 22:21:56.675279   80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:21:56.675286   80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 22:21:56.675291   80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:21:56.675295   80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:21:56.675316   80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:21:56.675325   80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:21:56.675337   80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 22:21:56.675345   80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:21:56.675349   80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:21:56.675356   80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:21:56.675361   80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 22:21:56.675367   80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:21:56.675372   80620 system_pods.go:74] duration metric: took 8.754325ms to wait for pod list to return data ...
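The kube-system wait is a single pod list: twelve pods are found, and the three control-plane pods show Ready:ContainersNotReady because their static pods are still restarting. A client-go sketch of the same list, assuming an already-built clientset:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods performs the same GET /api/v1/namespaces/kube-system/pods
    // as the round_trippers trace above and prints each pod's phase.
    func listSystemPods(cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%s [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }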
	I0223 22:21:56.675385   80620 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:21:56.675430   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0223 22:21:56.675437   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.675444   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.675451   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.680543   80620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 22:21:56.680557   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.680564   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.680569   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.680577   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.680582   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.680589   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.680597   80620 round_trippers.go:580]     Audit-Id: e86d112e-250e-4963-a6fb-b8fd3c902f59
	I0223 22:21:56.681128   80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16319 chars]
	I0223 22:21:56.681878   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681909   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681918   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681922   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681926   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681932   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681938   80620 node_conditions.go:105] duration metric: took 6.549163ms to run NodePressure ...
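The NodePressure check reads each node's capacity from the node list; the three identical cpu/ephemeral-storage pairs above are the three cluster nodes. A client-go sketch of reading those fields, under the same clientset assumption as before:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reads the capacity fields the NodePressure check logs
    // for every node (cpu and ephemeral-storage).
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
        return nil
    }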
	I0223 22:21:56.681958   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:56.825426   80620 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 22:21:56.885114   80620 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 22:21:56.886787   80620 command_runner.go:130] ! W0223 22:21:56.690228    2212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:56.886832   80620 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 22:21:56.886942   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0223 22:21:56.886954   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.886965   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.886975   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.889503   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.889525   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.889536   80620 round_trippers.go:580]     Audit-Id: a9179ace-0f8b-41d7-acc9-15a5468f5431
	I0223 22:21:56.889545   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.889552   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.889561   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.889569   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.889582   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.890569   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29273 chars]
	I0223 22:21:56.891994   80620 kubeadm.go:784] kubelet initialised
	I0223 22:21:56.892020   80620 kubeadm.go:785] duration metric: took 5.174392ms waiting for restarted kubelet to initialise ...
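From here the log enters the per-pod readiness waits: up to 4 minutes for each system-critical pod, where "Ready" is the standard PodReady condition. A sketch of that test, assuming the usual core/v1 types:

    package main

    import corev1 "k8s.io/api/core/v1"

    // podIsReady implements the readiness test the waits below apply to each
    // system-critical pod: the PodReady condition must be True.
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }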
	I0223 22:21:56.892029   80620 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:21:56.892094   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:21:56.892105   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.892115   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.892126   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.898216   80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 22:21:56.898231   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.898240   80620 round_trippers.go:580]     Audit-Id: 0cbc9df8-5ddc-4405-a649-09747f9c7e5c
	I0223 22:21:56.898250   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.898260   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.898268   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.898280   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.898290   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.899125   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
	I0223 22:21:56.901600   80620 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.901668   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:21:56.901680   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.901690   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.901697   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.906528   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:21:56.906543   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.906552   80620 round_trippers.go:580]     Audit-Id: c55b1693-f442-4306-a674-87f938885743
	I0223 22:21:56.906561   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.906571   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.906580   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.906589   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.906602   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.906875   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:21:56.907276   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.907287   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.907294   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.907312   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.916593   80620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0223 22:21:56.916608   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.916616   80620 round_trippers.go:580]     Audit-Id: 3b9497a6-fa4c-472e-b004-b0b6906e7a7f
	I0223 22:21:56.916625   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.916634   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.916644   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.916652   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.916662   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.916802   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.917117   80620 pod_ready.go:97] node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.917132   80620 pod_ready.go:81] duration metric: took 15.512217ms waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.917139   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
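Each pod wait also fetches the hosting node and short-circuits when that node is not Ready, as here: rather than burning the 4-minute budget on a pod that cannot become Ready, the wait logs the condition and moves on to the next pod. The node-side test is the analogous Ready-condition check, sketched under the same assumptions:

    package main

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady is the check behind the "Ready":"False" skips in the log:
    // a pod wait is aborted early when its node's NodeReady condition is not True.
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }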
	I0223 22:21:56.917145   80620 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.917197   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
	I0223 22:21:56.917206   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.917213   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.917219   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.919079   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.919091   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.919097   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.919103   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.919108   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.919114   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.919120   80620 round_trippers.go:580]     Audit-Id: 143d00d2-5e6b-44b2-a517-c658e2dc5a9f
	I0223 22:21:56.919129   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.919346   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/confi [truncated 6289 chars]
	I0223 22:21:56.919779   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.919793   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.919802   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.919808   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.921391   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.921406   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.921413   80620 round_trippers.go:580]     Audit-Id: 9f5eac9e-078a-4143-9d6d-1b1de0a3102a
	I0223 22:21:56.921423   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.921431   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.921440   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.921450   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.921460   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.921618   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.921957   80620 pod_ready.go:97] node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.921972   80620 pod_ready.go:81] duration metric: took 4.821003ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.921981   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.921998   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.922055   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
	I0223 22:21:56.922065   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.922076   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.922089   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.925010   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.925024   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.925033   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.925043   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.925052   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.925061   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.925070   80620 round_trippers.go:580]     Audit-Id: 422d48f0-48d6-4c16-8b22-40f26357fc34
	I0223 22:21:56.925075   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.925261   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"282","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0223 22:21:56.925639   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.925652   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.925659   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.925666   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.927337   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.927356   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.927365   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.927373   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.927382   80620 round_trippers.go:580]     Audit-Id: 020b9a46-ef43-4607-90e4-5d3e9e7d1a08
	I0223 22:21:56.927392   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.927401   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.927413   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.927579   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.927921   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.927940   80620 pod_ready.go:81] duration metric: took 5.928725ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.927950   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.927957   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.928048   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
	I0223 22:21:56.928062   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.928072   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.928082   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.930936   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.930950   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.930956   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.930961   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.930968   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.930982   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.930995   80620 round_trippers.go:580]     Audit-Id: 00aa01ac-5a84-4085-b3b5-f5f6d06fbe47
	I0223 22:21:56.931005   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.931218   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"739","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7424 chars]
	I0223 22:21:57.067070   80620 request.go:622] Waited for 135.338555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.067135   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.067145   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.067163   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.067176   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.070119   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.070137   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.070143   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.070149   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.070155   80620 round_trippers.go:580]     Audit-Id: 5d3402dd-3874-4131-9278-561b1ef77762
	I0223 22:21:57.070161   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.070167   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.070178   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.070297   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:57.070668   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.070691   80620 pod_ready.go:81] duration metric: took 142.727116ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:57.070704   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
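
The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages are produced by client-go's token-bucket rate limiter, configured by QPS and Burst on rest.Config (defaults 5 and 10); polling many pods back to back exceeds that budget. A minimal sketch, assuming a reachable kubeconfig at a hypothetical path, of raising those limits:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; raising them shortens or removes
	// the "Waited for ..." client-side throttling pauses seen above.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-773885", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name)
}
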
	I0223 22:21:57.070713   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.267166   80620 request.go:622] Waited for 196.388978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:21:57.267229   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:21:57.267239   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.267252   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.267264   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.269968   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.269991   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.270000   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.270012   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.270084   80620 round_trippers.go:580]     Audit-Id: 27049171-e30c-4ab9-a6ed-77da398a4856
	I0223 22:21:57.270104   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.270113   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.270123   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.270261   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0223 22:21:57.467146   80620 request.go:622] Waited for 196.375195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:21:57.467201   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:21:57.467207   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.467216   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.467235   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.469655   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.469680   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.469690   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.469716   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.469727   80620 round_trippers.go:580]     Audit-Id: d420f22f-77bb-4122-826c-40660cb2d6fb
	I0223 22:21:57.469734   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.469741   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.469749   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.469921   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0223 22:21:57.470230   80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
	I0223 22:21:57.470242   80620 pod_ready.go:81] duration metric: took 399.521519ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.470250   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.667697   80620 request.go:622] Waited for 197.385632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:21:57.667766   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:21:57.667771   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.667778   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.667785   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.670278   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.670298   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.670308   80620 round_trippers.go:580]     Audit-Id: 0128213a-339a-470c-989d-e7b486abebe1
	I0223 22:21:57.670316   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.670324   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.670333   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.670342   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.670351   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.670879   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"377","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 22:21:57.867695   80620 request.go:622] Waited for 196.388162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.867765   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.867770   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.867778   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.867784   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.870409   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.870431   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.870442   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.870452   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.870460   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.870466   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.870474   80620 round_trippers.go:580]     Audit-Id: a53d6f4e-2730-4846-9147-87d2b5b1bc56
	I0223 22:21:57.870483   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.870627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:57.870935   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.870951   80620 pod_ready.go:81] duration metric: took 400.694245ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:57.870962   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.870970   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.067390   80620 request.go:622] Waited for 196.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:21:58.067527   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:21:58.067575   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.067593   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.067604   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.071162   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:21:58.071181   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.071191   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.071199   80620 round_trippers.go:580]     Audit-Id: 49f82db0-63aa-4950-9457-03eeb73d1c6f
	I0223 22:21:58.071207   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.071215   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.071223   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.071231   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.071517   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0223 22:21:58.267044   80620 request.go:622] Waited for 195.100843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:21:58.267131   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:21:58.267138   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.267150   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.267161   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.269786   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.269805   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.269812   80620 round_trippers.go:580]     Audit-Id: 28398178-6b4f-4ced-bd50-76b0a4e432c0
	I0223 22:21:58.269818   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.269823   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.269828   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.269833   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.269846   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.270022   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
	I0223 22:21:58.270353   80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
	I0223 22:21:58.270367   80620 pod_ready.go:81] duration metric: took 399.384993ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.270378   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.467272   80620 request.go:622] Waited for 196.812846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:21:58.467358   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:21:58.467365   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.467376   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.467390   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.470141   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.470169   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.470179   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.470188   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.470195   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.470204   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.470213   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.470221   80620 round_trippers.go:580]     Audit-Id: e5044b8f-aa40-4729-93fe-c25c71ca551c
	I0223 22:21:58.470349   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"742","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5136 chars]
	I0223 22:21:58.667199   80620 request.go:622] Waited for 196.342723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.667264   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.667275   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.667288   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.667318   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.669825   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.669849   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.669860   80620 round_trippers.go:580]     Audit-Id: 8c1fc862-a3d1-4b08-b8c2-f41fa6fd3cd6
	I0223 22:21:58.669869   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.669877   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.669885   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.669899   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.669910   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.670129   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:58.670496   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:58.670517   80620 pod_ready.go:81] duration metric: took 400.130245ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:58.670528   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:58.670539   80620 pod_ready.go:38] duration metric: took 1.778499138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
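
The "waiting up to 4m0s for pod ... to be Ready" loops that just finished poll each pod's Ready condition until it is True or the timeout expires. An illustrative sketch of that loop using client-go's polling helper; minikube's actual implementation in pod_ready.go differs in detail (it also inspects the hosting node, as the "skipping!" lines show):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls every 500ms, up to timeout, until the pod reports
// the Ready condition with status True.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
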
	I0223 22:21:58.670563   80620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 22:21:58.684600   80620 command_runner.go:130] > -16
	I0223 22:21:58.684633   80620 ops.go:34] apiserver oom_adj: -16
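
The oom_adj check above shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj; the -16 result means the kernel OOM killer is strongly biased away from the apiserver. A native Go equivalent of the read, with the pid discovery elided and a hypothetical pid value:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := 1234 // hypothetical kube-apiserver pid, e.g. obtained via pgrep
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		panic(err)
	}
	// A negative value such as -16 makes the OOM killer far less
	// likely to select this process under memory pressure.
	fmt.Println(strings.TrimSpace(string(data)))
}
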
	I0223 22:21:58.684642   80620 kubeadm.go:637] restartCluster took 21.880365731s
	I0223 22:21:58.684651   80620 kubeadm.go:403] StartCluster complete in 21.912911073s
	I0223 22:21:58.684672   80620 settings.go:142] acquiring lock: {Name:mk906211444ec0c60982da29f94c92fb57d72ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:58.684774   80620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:58.685563   80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
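
The lock.go line above acquires a named lock with Delay:500ms and Timeout:1m0s before rewriting kubeconfig, so concurrent minikube processes cannot clobber each other's writes. A generic sketch of that acquire-with-retry-and-timeout pattern using an O_EXCL lock file; this is illustrative only, not minikube's actual lock implementation:

package lockfile

import (
	"errors"
	"os"
	"time"
)

// acquireLock retries creating an exclusive lock file every delay
// until it succeeds or timeout elapses, returning a release func.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay) // matches the Delay:500ms in the log
	}
}
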
	I0223 22:21:58.685892   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 22:21:58.686005   80620 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0223 22:21:58.686136   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:58.686171   80620 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:58.687964   80620 out.go:177] * Enabled addons: 
	I0223 22:21:58.686508   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
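
The rest.Config dump above shows the client authenticating with mutual TLS: a per-profile client cert/key pair plus the minikube CA. A hedged sketch of building the same kind of config by hand (paths are illustrative placeholders, not the real profile paths):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.39.240:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/multinode-773885/client.crt", // illustrative
			KeyFile:  "/path/to/profiles/multinode-773885/client.key", // illustrative
			CAFile:   "/path/to/.minikube/ca.crt",                     // illustrative
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(cs != nil)
}
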
	I0223 22:21:58.689318   80620 addons.go:492] enable addons completed in 3.316295ms: enabled=[]
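
The addons step walks the toEnable map printed above; every value is false here, so it completes in about 3ms with enabled=[]. A minimal sketch of that filtering (addon names abbreviated, enable hook elided):

package main

import "fmt"

func main() {
	toEnable := map[string]bool{
		"dashboard":           false,
		"ingress":             false,
		"metrics-server":      false,
		"storage-provisioner": false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name) // would invoke the addon's enable hook
		}
	}
	fmt.Printf("enable addons completed: enabled=%v\n", enabled)
}
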
	I0223 22:21:58.689636   80620 round_trippers.go:463] GET https://192.168.39.240:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:21:58.689653   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.689665   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.689674   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.692405   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.692425   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.692435   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.692448   80620 round_trippers.go:580]     Audit-Id: 2916b551-1504-4ee6-8f0b-8bb9b49c72fe
	I0223 22:21:58.692457   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.692474   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.692486   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.692499   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.692512   80620 round_trippers.go:580]     Content-Length: 291
	I0223 22:21:58.692541   80620 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88095e59-4c47-4f2e-9af0-397e7cc508de","resourceVersion":"743","creationTimestamp":"2023-02-23T22:17:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:21:58.692706   80620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773885" context rescaled to 1 replicas
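
The rescale above goes through the Deployment's Scale subresource (the GET .../deployments/coredns/scale call), which lets a client change replicas without rewriting the whole Deployment. A sketch, assuming an authenticated clientset, of the same read-then-update:

package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS pins the coredns Deployment to a single replica via
// the Scale subresource, as in the log above.
func rescaleCoreDNS(cs kubernetes.Interface) error {
	ctx := context.TODO()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == 1 {
		return nil // already at the desired size
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
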
	I0223 22:21:58.692739   80620 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:21:58.694468   80620 out.go:177] * Verifying Kubernetes components...
	I0223 22:21:58.696081   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:21:58.815357   80620 command_runner.go:130] > apiVersion: v1
	I0223 22:21:58.815388   80620 command_runner.go:130] > data:
	I0223 22:21:58.815395   80620 command_runner.go:130] >   Corefile: |
	I0223 22:21:58.815401   80620 command_runner.go:130] >     .:53 {
	I0223 22:21:58.815406   80620 command_runner.go:130] >         log
	I0223 22:21:58.815414   80620 command_runner.go:130] >         errors
	I0223 22:21:58.815423   80620 command_runner.go:130] >         health {
	I0223 22:21:58.815430   80620 command_runner.go:130] >            lameduck 5s
	I0223 22:21:58.815435   80620 command_runner.go:130] >         }
	I0223 22:21:58.815443   80620 command_runner.go:130] >         ready
	I0223 22:21:58.815455   80620 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 22:21:58.815461   80620 command_runner.go:130] >            pods insecure
	I0223 22:21:58.815470   80620 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 22:21:58.815479   80620 command_runner.go:130] >            ttl 30
	I0223 22:21:58.815485   80620 command_runner.go:130] >         }
	I0223 22:21:58.815495   80620 command_runner.go:130] >         prometheus :9153
	I0223 22:21:58.815501   80620 command_runner.go:130] >         hosts {
	I0223 22:21:58.815510   80620 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0223 22:21:58.815517   80620 command_runner.go:130] >            fallthrough
	I0223 22:21:58.815526   80620 command_runner.go:130] >         }
	I0223 22:21:58.815537   80620 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 22:21:58.815545   80620 command_runner.go:130] >            max_concurrent 1000
	I0223 22:21:58.815553   80620 command_runner.go:130] >         }
	I0223 22:21:58.815563   80620 command_runner.go:130] >         cache 30
	I0223 22:21:58.815574   80620 command_runner.go:130] >         loop
	I0223 22:21:58.815583   80620 command_runner.go:130] >         reload
	I0223 22:21:58.815595   80620 command_runner.go:130] >         loadbalance
	I0223 22:21:58.815605   80620 command_runner.go:130] >     }
	I0223 22:21:58.815614   80620 command_runner.go:130] > kind: ConfigMap
	I0223 22:21:58.815623   80620 command_runner.go:130] > metadata:
	I0223 22:21:58.815631   80620 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:17:37Z"
	I0223 22:21:58.815641   80620 command_runner.go:130] >   name: coredns
	I0223 22:21:58.815651   80620 command_runner.go:130] >   namespace: kube-system
	I0223 22:21:58.815660   80620 command_runner.go:130] >   resourceVersion: "360"
	I0223 22:21:58.815671   80620 command_runner.go:130] >   uid: 79632023-f720-4e05-a063-411c24789887
	I0223 22:21:58.818640   80620 node_ready.go:35] waiting up to 6m0s for node "multinode-773885" to be "Ready" ...
	I0223 22:21:58.818784   80620 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
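
The "CoreDNS already contains host.minikube.internal host record, skipping" decision above follows from the Corefile dumped just before it: the hosts block maps 192.168.39.1 to host.minikube.internal, so no patch is needed. A sketch of that presence check against the coredns ConfigMap (the actual patching path is elided):

package corefile

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord reports whether the Corefile already carries the
// host.minikube.internal hosts entry, matching the skip in the log.
func hasHostRecord(cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}
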
	I0223 22:21:58.866997   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.867022   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.867036   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.867046   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.869514   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.869542   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.869553   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.869562   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.869568   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.869573   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.869579   80620 round_trippers.go:580]     Audit-Id: ef8ca951-03a3-4673-b3b0-d6e949e3aba1
	I0223 22:21:58.869586   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.869696   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:59.370801   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:59.370828   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:59.370840   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:59.370850   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:59.373237   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:59.373263   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:59.373275   80620 round_trippers.go:580]     Audit-Id: cc5c5f53-65a1-48f1-8d30-2983a96a1517
	I0223 22:21:59.373284   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:59.373292   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:59.373301   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:59.373310   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:59.373320   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:59 GMT
	I0223 22:21:59.373432   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:59.871104   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:59.871130   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:59.871142   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:59.871152   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:59.873824   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:59.873849   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:59.873860   80620 round_trippers.go:580]     Audit-Id: a0c12052-13ba-4532-b2cb-ef0712468e2c
	I0223 22:21:59.873868   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:59.873877   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:59.873890   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:59.873898   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:59.873910   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:59 GMT
	I0223 22:21:59.874344   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.371108   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:00.371138   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:00.371150   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:00.371160   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:00.373796   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:00.373818   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:00.373826   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:00.373832   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:00.373837   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:00.373843   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:00 GMT
	I0223 22:22:00.373849   80620 round_trippers.go:580]     Audit-Id: 6d76f1af-c5ab-44d4-ac95-d4a732c54af0
	I0223 22:22:00.373861   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:00.374155   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.870897   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:00.870933   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:00.870942   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:00.870951   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:00.873427   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:00.873451   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:00.873462   80620 round_trippers.go:580]     Audit-Id: 494f6db1-2d29-4a14-be25-f5115f464c6c
	I0223 22:22:00.873471   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:00.873485   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:00.873495   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:00.873504   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:00.873512   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:00 GMT
	I0223 22:22:00.873654   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.874130   80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
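
The node_ready.go polling above repeats the same GET roughly every 500ms until the node's Ready condition flips to True; the kubelet only sets that condition once the container runtime and CNI report healthy, which is why the CNI-configuring restart keeps it False for a while. A sketch of the condition test itself:

package nodeready

import (
	corev1 "k8s.io/api/core/v1"
)

// isNodeReady mirrors the check behind the node_ready.go messages:
// scan status.conditions for type Ready and require status "True".
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
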
	I0223 22:22:01.370246   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:01.370268   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:01.370279   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:01.370286   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:01.372742   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:01.372768   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:01.372779   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:01.372787   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:01.372796   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:01.372808   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:01 GMT
	I0223 22:22:01.372816   80620 round_trippers.go:580]     Audit-Id: d657d94b-1177-4e47-9c6a-10517add9c29
	I0223 22:22:01.372827   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:01.372974   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:01.870635   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:01.870664   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:01.870672   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:01.870679   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:01.873350   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:01.873373   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:01.873386   80620 round_trippers.go:580]     Audit-Id: 3aae1eee-a094-424f-bbd3-1cc775206a05
	I0223 22:22:01.873395   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:01.873403   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:01.873410   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:01.873419   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:01.873428   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:01 GMT
	I0223 22:22:01.873701   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.370356   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:02.370378   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:02.370386   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:02.370392   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:02.373961   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:02.373983   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:02.373992   80620 round_trippers.go:580]     Audit-Id: 2d8ae255-30e7-495f-82a8-f977058510be
	I0223 22:22:02.374000   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:02.374008   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:02.374018   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:02.374028   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:02.374041   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:02 GMT
	I0223 22:22:02.374362   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.871107   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:02.871133   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:02.871148   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:02.871157   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:02.873653   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:02.873672   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:02.873680   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:02.873686   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:02.873691   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:02 GMT
	I0223 22:22:02.873697   80620 round_trippers.go:580]     Audit-Id: 88e3a2a0-3a44-456c-a122-9443f9691153
	I0223 22:22:02.873706   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:02.873715   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:02.874022   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.874437   80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
	I0223 22:22:03.370842   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:03.370869   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:03.370886   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:03.370894   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:03.372889   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:03.372909   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:03.372916   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:03.372922   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:03 GMT
	I0223 22:22:03.372928   80620 round_trippers.go:580]     Audit-Id: 553e23aa-d7b4-4f46-b968-491b3c19b7a9
	I0223 22:22:03.372934   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:03.372942   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:03.372954   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:03.373055   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:03.870742   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:03.870764   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:03.870773   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:03.870779   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:03.873449   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:03.873469   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:03.873476   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:03.873482   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:03.873487   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:03 GMT
	I0223 22:22:03.873493   80620 round_trippers.go:580]     Audit-Id: d10ccbbb-11df-43ab-9526-c648f4eb57ab
	I0223 22:22:03.873499   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:03.873504   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:03.873699   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:04.370303   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.370324   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.370332   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.370339   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.372813   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.372839   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.372851   80620 round_trippers.go:580]     Audit-Id: bdad9e22-9644-4e1c-8f6c-ae6fc5d4caf1
	I0223 22:22:04.372861   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.372870   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.372879   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.372893   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.372902   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.373649   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:04.870293   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.870319   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.870327   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.870333   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.873111   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.873137   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.873148   80620 round_trippers.go:580]     Audit-Id: 356034ea-3c99-4375-a746-070c2cc9db4c
	I0223 22:22:04.873157   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.873164   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.873172   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.873182   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.873192   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.873417   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:04.873740   80620 node_ready.go:49] node "multinode-773885" has status "Ready":"True"
	I0223 22:22:04.873759   80620 node_ready.go:38] duration metric: took 6.055088164s waiting for node "multinode-773885" to be "Ready" ...
	I0223 22:22:04.873768   80620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
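
To seed that wait, the client lists every pod in kube-system in the next request and filters client-side against those labels. A server-side alternative is the API's labelSelector query parameter; the sketch below applies it for one of the listed selectors, k8s-app=kube-dns, against the logged endpoint, again with authentication and TLS trust setup omitted.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// podList mirrors only the fields of a v1.PodList needed to print names.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	// Endpoint from the log; labelSelector asks the API server to filter
	// instead of fetching all pods and filtering client-side.
	base := "https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods"
	resp, err := http.Get(base + "?labelSelector=" + url.QueryEscape("k8s-app=kube-dns"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var pl podList
	if err := json.NewDecoder(resp.Body).Decode(&pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		fmt.Println(p.Metadata.Name) // e.g. coredns-787d4945fb-ktr7h
	}
}
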
	I0223 22:22:04.873821   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:04.873828   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.873836   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.873842   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.877171   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:04.877190   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.877199   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.877209   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.877217   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.877225   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.877234   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.877242   80620 round_trippers.go:580]     Audit-Id: ea2e3ce7-5ec8-4de8-affe-00217b9f0f75
	I0223 22:22:04.878185   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"788"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83657 chars]
	I0223 22:22:04.880661   80620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:04.880721   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:04.880729   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.880736   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.880743   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.882620   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:04.882637   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.882643   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.882649   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.882654   80620 round_trippers.go:580]     Audit-Id: b8c34b52-e089-4d20-abac-792cd26a154e
	I0223 22:22:04.882660   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.882665   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.882671   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.882780   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:04.883130   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.883141   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.883148   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.883154   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.885545   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.885559   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.885566   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.885571   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.885577   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.885582   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.885590   80620 round_trippers.go:580]     Audit-Id: a935859f-b8a0-4ddc-8ffe-b88f374b4617
	I0223 22:22:04.885597   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.885668   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:05.386735   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:05.386762   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.386775   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.386785   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.389024   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:05.389044   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.389055   80620 round_trippers.go:580]     Audit-Id: 5162732a-6a2d-4976-bd1a-d7a30dbd6874
	I0223 22:22:05.389063   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.389070   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.389082   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.389095   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.389103   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.389223   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:05.389693   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:05.389706   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.389713   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.389722   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.391445   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:05.391462   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.391469   80620 round_trippers.go:580]     Audit-Id: 152ffe10-665f-45a2-8a81-8746544ba57e
	I0223 22:22:05.391475   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.391482   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.391491   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.391501   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.391511   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.391627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:05.886225   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:05.886248   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.886257   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.886264   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.888353   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:05.888389   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.888399   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.888408   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.888417   80620 round_trippers.go:580]     Audit-Id: cc5f0143-2508-446f-907a-56ab533f7430
	I0223 22:22:05.888426   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.888438   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.888446   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.889024   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:05.889458   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:05.889469   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.889476   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.889484   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.891242   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:05.891257   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.891263   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.891269   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.891275   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.891283   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.891293   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.891319   80620 round_trippers.go:580]     Audit-Id: ee3b00fc-914b-4eba-8a45-e4597d8f6d25
	I0223 22:22:05.891627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.386281   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:06.386303   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.386311   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.386326   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.388974   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:06.388992   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.388999   80620 round_trippers.go:580]     Audit-Id: 220c9abc-71ea-4bf1-984a-8b6e023377f1
	I0223 22:22:06.389014   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.389026   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.389038   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.389046   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.389052   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.389842   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:06.390308   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:06.390321   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.390328   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.390337   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.391935   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:06.391953   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.391962   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.391970   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.391980   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.391989   80620 round_trippers.go:580]     Audit-Id: 7685b789-c707-4d17-88af-7145585bce78
	I0223 22:22:06.391998   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.392010   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.392362   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.886127   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:06.886150   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.886159   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.886165   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.889975   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:06.890001   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.890013   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.890023   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.890035   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.890048   80620 round_trippers.go:580]     Audit-Id: 87848966-24d5-45b3-a7aa-56f65410f508
	I0223 22:22:06.890057   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.890070   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.890267   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:06.890721   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:06.890734   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.890741   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.890747   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.895655   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:06.895674   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.895684   80620 round_trippers.go:580]     Audit-Id: f054bb7d-1199-4b8d-b3f0-4c0274f1d63d
	I0223 22:22:06.895693   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.895702   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.895713   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.895724   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.895736   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.896139   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.896420   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
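
pod_ready keeps reporting Ready=False here because the coredns pod object, unchanged at resourceVersion 745 across these polls, does not yet carry a Ready condition with status True. That is the standard pod-readiness test, sketched below against a trimmed-down Pod structure; the field names are the stock Kubernetes ones, and error handling is reduced for brevity.

package main

import (
	"encoding/json"
	"fmt"
)

// pod mirrors only the status fields relevant to this check.
type pod struct {
	Status struct {
		Phase      string `json:"phase"`
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// podReady reports whether a serialized Pod carries the condition
// {Type: "Ready", Status: "True"}.
func podReady(raw []byte) bool {
	var p pod
	if json.Unmarshal(raw, &p) != nil {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// A pod can be Running while its Ready condition is still False, which
	// is exactly the state the poll above is waiting out.
	raw := []byte(`{"status":{"phase":"Running","conditions":[{"type":"Ready","status":"False"}]}}`)
	fmt.Println(podReady(raw)) // false
}
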
	I0223 22:22:07.386841   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:07.386862   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.386871   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.386878   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.389998   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:07.390025   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.390036   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.390046   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.390054   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.390062   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.390070   80620 round_trippers.go:580]     Audit-Id: d6b7ea92-112f-499d-a61b-86d8245e8558
	I0223 22:22:07.390078   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.390244   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:07.390679   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:07.390690   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.390698   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.390704   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.392927   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:07.392948   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.392958   80620 round_trippers.go:580]     Audit-Id: e7498617-1172-42fd-b07a-d2d628e52a21
	I0223 22:22:07.392969   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.392988   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.393002   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.393011   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.393022   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.393607   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:07.886231   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:07.886254   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.886277   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.886284   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.889328   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:07.889351   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.889359   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.889366   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.889371   80620 round_trippers.go:580]     Audit-Id: 996a8d26-ab61-4eb1-a206-c0fb32514e06
	I0223 22:22:07.889377   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.889382   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.889388   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.889970   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:07.890413   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:07.890425   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.890432   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.890439   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.897920   80620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0223 22:22:07.897934   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.897941   80620 round_trippers.go:580]     Audit-Id: 4221b7db-ff10-4443-aed5-78c6f7b9296c
	I0223 22:22:07.897947   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.897953   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.897958   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.897966   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.897972   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.898379   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:08.386191   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:08.386213   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.386224   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.386234   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.388618   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.388637   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.388644   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.388652   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.388660   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.388668   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.388689   80620 round_trippers.go:580]     Audit-Id: 9fd3f354-aaea-4470-b0a9-a62bb9cf4b81
	I0223 22:22:08.388695   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.389016   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:08.389462   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:08.389474   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.389484   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.389493   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.391347   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:08.391366   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.391376   80620 round_trippers.go:580]     Audit-Id: d2b922bc-cc07-4d6a-a919-5b81247f7675
	I0223 22:22:08.391385   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.391396   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.391405   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.391414   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.391419   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.391692   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:08.886358   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:08.886387   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.886397   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.886403   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.889174   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.889200   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.889209   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.889215   80620 round_trippers.go:580]     Audit-Id: 7d35bf13-e46b-4b70-b379-eef2287d1352
	I0223 22:22:08.889220   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.889226   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.889231   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.889236   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.889437   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:08.889910   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:08.889923   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.889931   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.889937   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.892893   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.892908   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.892914   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.892919   80620 round_trippers.go:580]     Audit-Id: c156c99d-e130-4f55-b4e3-14616a7ba70f
	I0223 22:22:08.892927   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.892936   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.892945   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.892956   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.893597   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:09.386240   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:09.386263   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.386272   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.386278   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.388959   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.388983   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.388991   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.388997   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.389002   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.389007   80620 round_trippers.go:580]     Audit-Id: b1b9610c-e081-4bbb-837e-8be581f68475
	I0223 22:22:09.389013   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.389018   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.389296   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:09.389849   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:09.389877   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.389888   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.389895   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.391871   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:09.391888   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.391895   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.391900   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.391906   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.391911   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.391916   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.391930   80620 round_trippers.go:580]     Audit-Id: 002294de-1a26-4570-886e-0a7800195800
	I0223 22:22:09.392074   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:09.392445   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
	I0223 22:22:09.886775   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:09.886796   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.886805   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.886812   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.889680   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.889703   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.889710   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.889716   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.889722   80620 round_trippers.go:580]     Audit-Id: 3a94f330-f28f-46c4-a648-51998b06aed1
	I0223 22:22:09.889730   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.889740   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.889749   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.889960   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:09.890412   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:09.890426   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.890433   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.890439   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.893112   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.893124   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.893131   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.893136   80620 round_trippers.go:580]     Audit-Id: f1b19073-36ac-4a4c-b6c5-aa4b69ec1776
	I0223 22:22:09.893141   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.893148   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.893156   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.893165   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.893436   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:10.386076   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:10.386100   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.386109   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.386115   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.388462   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:10.388484   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.388491   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.388497   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.388502   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.388508   80620 round_trippers.go:580]     Audit-Id: b0c0f970-513c-4958-8f0f-9012dbfa36d5
	I0223 22:22:10.388513   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.388518   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.388755   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:10.389295   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:10.389312   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.389323   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.389333   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.391529   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:10.391550   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.391560   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.391568   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.391574   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.391582   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.391587   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.391593   80620 round_trippers.go:580]     Audit-Id: 10261026-5803-485c-834a-bf21f0cb79e3
	I0223 22:22:10.391676   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:10.886276   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:10.886298   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.886310   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.886319   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.890190   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:10.890215   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.890222   80620 round_trippers.go:580]     Audit-Id: b6386ff9-de93-4709-b3ef-d903d0d5a9cc
	I0223 22:22:10.890228   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.890234   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.890239   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.890245   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.890251   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.890402   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0223 22:22:10.890869   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:10.890883   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.890893   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.890902   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.895016   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:10.895035   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.895046   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.895055   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.895064   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.895073   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.895080   80620 round_trippers.go:580]     Audit-Id: 2e664d84-586c-4ab6-94bc-ba77835a654d
	I0223 22:22:10.895085   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.895436   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.386154   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:11.386182   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.386193   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.386202   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.388774   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.388795   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.388805   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.388814   80620 round_trippers.go:580]     Audit-Id: 0b53d934-8f77-4a2f-bbe6-92be4d3d5c17
	I0223 22:22:11.388822   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.388831   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.388848   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.388858   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.389048   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0223 22:22:11.389509   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.389522   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.389532   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.389541   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.391436   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:11.391458   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.391475   80620 round_trippers.go:580]     Audit-Id: f0d5469c-1828-43e0-99ac-880d59c5ca18
	I0223 22:22:11.391486   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.391496   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.391502   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.391508   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.391514   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.392144   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.392489   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
	I0223 22:22:11.886705   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:11.886728   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.886740   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.886747   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.897949   80620 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0223 22:22:11.897972   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.897979   80620 round_trippers.go:580]     Audit-Id: ee3fad82-cb14-466d-be80-d787cdfe18c6
	I0223 22:22:11.897988   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.897996   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.898005   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.898014   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.898023   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.898203   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0223 22:22:11.898695   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.898709   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.898716   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.898722   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.901522   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.901537   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.901546   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.901555   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.901565   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.901574   80620 round_trippers.go:580]     Audit-Id: 67ab3f98-4824-4d37-9baa-d6fde6241cd3
	I0223 22:22:11.901583   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.901592   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.901884   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.902261   80620 pod_ready.go:92] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.902281   80620 pod_ready.go:81] duration metric: took 7.021599209s waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
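
The loop that just completed above is the readiness poll in action: GET the pod, read its Ready condition, sleep roughly half a second, repeat until the condition reports True. A minimal client-go sketch of that pattern follows; this is an editorial illustration, not minikube's actual pod_ready.go, and the clientset wiring and names are assumed.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's Ready condition is True or the
// timeout elapses, mirroring the ~500ms GET loop in the log above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval; the log shows ~500ms gaps
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
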
	I0223 22:22:11.902292   80620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.902345   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
	I0223 22:22:11.902362   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.902374   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.902387   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.905539   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:11.905555   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.905564   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.905573   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.905584   80620 round_trippers.go:580]     Audit-Id: b11ef536-b4c5-482e-aa7c-76d59636d5d2
	I0223 22:22:11.905592   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.905600   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.905608   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.906366   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"802","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6065 chars]
	I0223 22:22:11.906856   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.906876   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.906892   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.906903   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.908814   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:11.908827   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.908833   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.908838   80620 round_trippers.go:580]     Audit-Id: afa24933-99a3-4732-ab8c-89f796285545
	I0223 22:22:11.908844   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.908849   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.908860   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.908868   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.909140   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.909495   80620 pod_ready.go:92] pod "etcd-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.909509   80620 pod_ready.go:81] duration metric: took 7.209083ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.909528   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.909582   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
	I0223 22:22:11.909592   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.909603   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.909616   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.911700   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.911720   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.911729   80620 round_trippers.go:580]     Audit-Id: 779ea438-bd06-40b6-ba45-805cc766e96d
	I0223 22:22:11.911737   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.911745   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.911754   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.911762   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.911772   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.911987   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"793","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7599 chars]
	I0223 22:22:11.912445   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.912459   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.912475   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.912485   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.914590   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.914610   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.914619   80620 round_trippers.go:580]     Audit-Id: 05b9d526-86d7-43a1-a29b-8b19eb1394d1
	I0223 22:22:11.914628   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.914637   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.914659   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.914670   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.914685   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.914841   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.915184   80620 pod_ready.go:92] pod "kube-apiserver-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.915198   80620 pod_ready.go:81] duration metric: took 5.656927ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.915207   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.915261   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
	I0223 22:22:11.915271   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.915282   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.915294   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.917370   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.917390   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.917400   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.917407   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.917416   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.917424   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.917434   80620 round_trippers.go:580]     Audit-Id: 1c6ec0cd-a712-46c0-9127-fc5aaaf54dca
	I0223 22:22:11.917444   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.917666   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"825","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7162 chars]
	I0223 22:22:11.918056   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.918067   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.918078   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.918090   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.920329   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.920349   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.920359   80620 round_trippers.go:580]     Audit-Id: 4abce7c0-9628-4d94-8005-2a2dfc23a6e7
	I0223 22:22:11.920367   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.920377   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.920386   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.920394   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.920410   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.921292   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.921655   80620 pod_ready.go:92] pod "kube-controller-manager-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.921672   80620 pod_ready.go:81] duration metric: took 6.456858ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.921682   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.921744   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:22:11.921759   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.921770   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.921788   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.923979   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.923999   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.924008   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.924016   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.924024   80620 round_trippers.go:580]     Audit-Id: 0efbb785-cf58-48c7-81ba-79e7df1fffe6
	I0223 22:22:11.924037   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.924045   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.924054   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.924324   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0223 22:22:11.924642   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:22:11.924651   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.924659   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.924668   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.927145   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.927164   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.927174   80620 round_trippers.go:580]     Audit-Id: d525fadc-555c-4d29-8ba1-8f98e144287a
	I0223 22:22:11.927190   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.927201   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.927209   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.927221   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.927230   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.927662   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0223 22:22:11.927907   80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.927917   80620 pod_ready.go:81] duration metric: took 6.229355ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.927924   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.087372   80620 request.go:622] Waited for 159.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:22:12.087472   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:22:12.087484   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.087494   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.087506   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.090953   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.090975   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.090982   80620 round_trippers.go:580]     Audit-Id: d476c971-82f9-4e13-bf24-ac1d0a7e0132
	I0223 22:22:12.090988   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.091000   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.091015   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.091023   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.091034   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.091257   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"751","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I0223 22:22:12.287106   80620 request.go:622] Waited for 195.345935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:12.287171   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:12.287176   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.287184   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.287190   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.290450   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.290482   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.290493   80620 round_trippers.go:580]     Audit-Id: 293be0f3-4481-47c8-8397-f5bcd5d19b91
	I0223 22:22:12.290503   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.290511   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.290527   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.290541   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.290550   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.290685   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:12.290991   80620 pod_ready.go:92] pod "kube-proxy-mdjks" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:12.291002   80620 pod_ready.go:81] duration metric: took 363.073923ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
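
The "Waited ... due to client-side throttling" lines above are emitted by client-go when its default token-bucket rate limiter (QPS 5, burst 10) delays a request, which is what happens once the pod and node GETs arrive in quick bursts. The sketch below shows how a caller could loosen that limiter on its rest.Config; the QPS/burst values are illustrative, not what minikube ships.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

// newFastClient builds a clientset whose rate limiter tolerates larger
// bursts, so short polling storms are not delayed client-side.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is 5
	cfg.Burst = 100 // client-go default is 10
	// An explicit limiter takes precedence over QPS/Burst if both are set:
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	return kubernetes.NewForConfig(cfg)
}
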
	I0223 22:22:12.291011   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.487380   80620 request.go:622] Waited for 196.297867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:22:12.487451   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:22:12.487455   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.487463   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.487470   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.490351   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:12.490369   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.490376   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.490382   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.490390   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.490396   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.490402   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.490408   80620 round_trippers.go:580]     Audit-Id: 3101849d-f3a0-4ede-99b6-2a380cea5ba6
	I0223 22:22:12.490636   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0223 22:22:12.687374   80620 request.go:622] Waited for 196.32053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:22:12.687452   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:22:12.687458   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.687466   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.687472   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.690923   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.690945   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.690952   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.690958   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.690963   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.690969   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.690975   80620 round_trippers.go:580]     Audit-Id: f8604e33-edeb-42ae-8e19-5e27a6bd8d7d
	I0223 22:22:12.690980   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.693472   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
	I0223 22:22:12.693842   80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:12.693857   80620 pod_ready.go:81] duration metric: took 402.838971ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.693868   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.886856   80620 request.go:622] Waited for 192.90851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:22:12.886917   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:22:12.886932   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.886943   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.886952   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.893080   80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 22:22:12.893102   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.893109   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.893115   80620 round_trippers.go:580]     Audit-Id: 854e2fd9-4c25-4b2f-bc59-61d21fabfb74
	I0223 22:22:12.893120   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.893125   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.893131   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.893136   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.893332   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"786","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4892 chars]
	I0223 22:22:13.087065   80620 request.go:622] Waited for 193.332526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:13.087127   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:13.087133   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.087143   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.087153   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.091144   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:13.091162   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.091169   80620 round_trippers.go:580]     Audit-Id: bf568af1-d7fc-4da0-9559-42a27fc0cef3
	I0223 22:22:13.091175   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.091181   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.091186   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.091198   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.091210   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.091630   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:13.091948   80620 pod_ready.go:92] pod "kube-scheduler-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:13.091980   80620 pod_ready.go:81] duration metric: took 398.085634ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:13.091998   80620 pod_ready.go:38] duration metric: took 8.218220101s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
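
The pod_ready wait above polls each system pod until its PodReady condition reports True, with a per-pod budget of 6m0s. A minimal client-go sketch of the same check, using the pod name and namespace from the log; the 2s polling interval is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-773885", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // interval is an assumption
	}
	fmt.Println("timed out waiting for Ready")
}
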
	I0223 22:22:13.092020   80620 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:22:13.092066   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:22:13.104775   80620 command_runner.go:130] > 1675
	I0223 22:22:13.104818   80620 api_server.go:71] duration metric: took 14.412044719s to wait for apiserver process to appear ...
	I0223 22:22:13.104835   80620 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:22:13.104847   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:22:13.110111   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0223 22:22:13.110176   80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0223 22:22:13.110187   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.110206   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.110217   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.110872   80620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0223 22:22:13.110888   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.110895   80620 round_trippers.go:580]     Audit-Id: 4f7ff6ce-bed0-47c2-918d-6dd15db9ce31
	I0223 22:22:13.110901   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.110906   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.110911   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.110918   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.110923   80620 round_trippers.go:580]     Content-Length: 263
	I0223 22:22:13.110930   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.110950   80620 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:22:13.111007   80620 api_server.go:140] control plane version: v1.26.1
	I0223 22:22:13.111018   80620 api_server.go:130] duration metric: took 6.177354ms to wait for apiserver health ...
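
The health probe above is two plain HTTPS GETs: /healthz must answer 200 with the literal body "ok", and /version returns the JSON shown. A sketch with net/http; skipping TLS verification is a simplification here, since the real client authenticates with the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Simplification: the real check trusts the cluster CA and presents
	// client certs; InsecureSkipVerify is only for this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.240:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok".
	fmt.Println(resp.StatusCode, string(body))
}
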
	I0223 22:22:13.111024   80620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:22:13.287730   80620 request.go:622] Waited for 176.607463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.287780   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.287784   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.287794   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.287804   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.292061   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:13.292080   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.292087   80620 round_trippers.go:580]     Audit-Id: 8f903081-07eb-4386-b54e-2c988265836f
	I0223 22:22:13.292096   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.292104   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.292110   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.292116   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.292121   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.294183   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
	I0223 22:22:13.296686   80620 system_pods.go:59] 12 kube-system pods found
	I0223 22:22:13.296706   80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:22:13.296711   80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
	I0223 22:22:13.296715   80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:22:13.296719   80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:22:13.296723   80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:22:13.296727   80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:22:13.296731   80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
	I0223 22:22:13.296737   80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:22:13.296741   80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:22:13.296745   80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:22:13.296750   80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
	I0223 22:22:13.296754   80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:22:13.296759   80620 system_pods.go:74] duration metric: took 185.729884ms to wait for pod list to return data ...
	I0223 22:22:13.296768   80620 default_sa.go:34] waiting for default service account to be created ...
	I0223 22:22:13.487059   80620 request.go:622] Waited for 190.213748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:22:13.487142   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:22:13.487151   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.487163   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.487179   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.490660   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:13.490686   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.490698   80620 round_trippers.go:580]     Content-Length: 261
	I0223 22:22:13.490707   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.490715   80620 round_trippers.go:580]     Audit-Id: b33f914f-7659-4fc8-8f76-26f7e677ba77
	I0223 22:22:13.490724   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.490733   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.490746   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.490755   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.490784   80620 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"860"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"62ac0740-2090-4217-a812-0d7ea88a967e","resourceVersion":"301","creationTimestamp":"2023-02-23T22:17:49Z"}}]}
	I0223 22:22:13.491028   80620 default_sa.go:45] found service account: "default"
	I0223 22:22:13.491048   80620 default_sa.go:55] duration metric: took 194.273065ms for default service account to be created ...
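
The default-service-account step above lists service accounts in the "default" namespace and looks for one named "default". A direct Get is equivalent; a minimal sketch, assuming the same kubeconfig location as in the earlier sketches:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The log lists all service accounts; fetching "default" directly is equivalent.
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}
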
	I0223 22:22:13.491059   80620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 22:22:13.687553   80620 request.go:622] Waited for 196.395892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.687624   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.687630   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.687642   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.687659   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.691923   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:13.691949   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.691960   80620 round_trippers.go:580]     Audit-Id: b99f1d26-3de6-4548-9948-e1ef63d9e02a
	I0223 22:22:13.691969   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.691980   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.691988   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.691997   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.692005   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.693522   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
	I0223 22:22:13.695955   80620 system_pods.go:86] 12 kube-system pods found
	I0223 22:22:13.695978   80620 system_pods.go:89] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:22:13.695985   80620 system_pods.go:89] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
	I0223 22:22:13.695993   80620 system_pods.go:89] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:22:13.695999   80620 system_pods.go:89] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:22:13.696005   80620 system_pods.go:89] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:22:13.696012   80620 system_pods.go:89] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:22:13.696020   80620 system_pods.go:89] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
	I0223 22:22:13.696028   80620 system_pods.go:89] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:22:13.696040   80620 system_pods.go:89] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:22:13.696048   80620 system_pods.go:89] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:22:13.696055   80620 system_pods.go:89] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
	I0223 22:22:13.696061   80620 system_pods.go:89] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:22:13.696071   80620 system_pods.go:126] duration metric: took 205.005964ms to wait for k8s-apps to be running ...
	I0223 22:22:13.696085   80620 system_svc.go:44] waiting for kubelet service to be running ...
	I0223 22:22:13.696135   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:22:13.709623   80620 system_svc.go:56] duration metric: took 13.531533ms WaitForService to wait for kubelet.
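
The kubelet check above shells out to systemctl over SSH; with --quiet there is no output, so only the exit status matters (0 means the unit is active). A local sketch of the same probe with os/exec; the exact in-VM invocation is the one shown in the ssh_runner line above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; --quiet suppresses stdout.
	// The real check runs this inside the VM over SSH.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
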
	I0223 22:22:13.709679   80620 kubeadm.go:578] duration metric: took 15.016875282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 22:22:13.709713   80620 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:22:13.887138   80620 request.go:622] Waited for 177.351024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0223 22:22:13.887250   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0223 22:22:13.887261   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.887269   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.887276   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.889579   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:13.889601   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.889608   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.889614   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.889620   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.889625   80620 round_trippers.go:580]     Audit-Id: 4402b5a7-68c0-489c-bf87-bedbd28a14fe
	I0223 22:22:13.889631   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.889636   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.889855   80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16192 chars]
	I0223 22:22:13.890436   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890455   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890468   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890474   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890481   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890489   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890496   80620 node_conditions.go:105] duration metric: took 180.777399ms to run NodePressure ...
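
The NodePressure step above lists all three nodes and prints storage and CPU figures for each. The log does not show whether it reads Capacity or Allocatable, so this sketch reads Capacity; the field names are the real Kubernetes API ones:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These are the quantities the log reports (17784752Ki storage, 2 CPUs).
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
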
	I0223 22:22:13.890512   80620 start.go:228] waiting for startup goroutines ...
	I0223 22:22:13.890522   80620 start.go:233] waiting for cluster config update ...
	I0223 22:22:13.890533   80620 start.go:242] writing updated cluster config ...
	I0223 22:22:13.890966   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:13.891077   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:13.893728   80620 out.go:177] * Starting worker node multinode-773885-m02 in cluster multinode-773885
	I0223 22:22:13.895212   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:22:13.895236   80620 cache.go:57] Caching tarball of preloaded images
	I0223 22:22:13.895333   80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:22:13.895345   80620 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
	I0223 22:22:13.895468   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:13.895625   80620 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:22:13.895655   80620 start.go:364] acquiring machines lock for multinode-773885-m02: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 22:22:13.895705   80620 start.go:368] acquired machines lock for "multinode-773885-m02" in 30.081µs
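
start.go serializes machine operations behind a named lock (acquired here in 30.081µs) with a 500ms retry delay and a 13m timeout, as the lock spec above shows. A flock-based sketch of the same acquire-with-retry pattern; the lock-file path and helper are illustrative, not minikube's own lock package:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquire takes an exclusive advisory lock on path, retrying every delay
// until timeout. Illustrative only; minikube uses its own lock machinery.
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquire("/tmp/machines-multinode.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("lock held; safe to start or stop the machine")
}
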
	I0223 22:22:13.895724   80620 start.go:96] Skipping create...Using existing machine configuration
	I0223 22:22:13.895732   80620 fix.go:55] fixHost starting: m02
	I0223 22:22:13.896010   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:13.896038   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:13.910341   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0223 22:22:13.910796   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:13.911318   80620 main.go:141] libmachine: Using API Version 1
	I0223 22:22:13.911343   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:13.911672   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:13.911860   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:13.911979   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
	I0223 22:22:13.913566   80620 fix.go:103] recreateIfNeeded on multinode-773885-m02: state=Stopped err=<nil>
	I0223 22:22:13.913585   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	W0223 22:22:13.913746   80620 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 22:22:13.915708   80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885-m02" ...
	I0223 22:22:13.917009   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .Start
	I0223 22:22:13.917151   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring networks are active...
	I0223 22:22:13.917783   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network default is active
	I0223 22:22:13.918134   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network mk-multinode-773885 is active
	I0223 22:22:13.918457   80620 main.go:141] libmachine: (multinode-773885-m02) Getting domain xml...
	I0223 22:22:13.919047   80620 main.go:141] libmachine: (multinode-773885-m02) Creating domain...
	I0223 22:22:15.148655   80620 main.go:141] libmachine: (multinode-773885-m02) Waiting to get IP...
	I0223 22:22:15.149521   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.149889   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.149974   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.149904   80738 retry.go:31] will retry after 193.258579ms: waiting for machine to come up
	I0223 22:22:15.344335   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.344701   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.344731   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.344650   80738 retry.go:31] will retry after 325.897575ms: waiting for machine to come up
	I0223 22:22:15.672194   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.672594   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.672628   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.672550   80738 retry.go:31] will retry after 464.389068ms: waiting for machine to come up
	I0223 22:22:16.138184   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:16.138690   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:16.138753   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.138682   80738 retry.go:31] will retry after 418.748231ms: waiting for machine to come up
	I0223 22:22:16.559096   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:16.559605   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:16.559635   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.559550   80738 retry.go:31] will retry after 471.42311ms: waiting for machine to come up
	I0223 22:22:17.033003   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:17.033388   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:17.033425   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.033349   80738 retry.go:31] will retry after 716.223287ms: waiting for machine to come up
	I0223 22:22:17.751192   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:17.751627   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:17.751662   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.751564   80738 retry.go:31] will retry after 829.526019ms: waiting for machine to come up
	I0223 22:22:18.582469   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:18.582861   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:18.582893   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:18.582810   80738 retry.go:31] will retry after 1.314736274s: waiting for machine to come up
	I0223 22:22:19.898527   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:19.898968   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:19.898996   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:19.898923   80738 retry.go:31] will retry after 1.848898641s: waiting for machine to come up
	I0223 22:22:21.749410   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:21.749799   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:21.749831   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:21.749746   80738 retry.go:31] will retry after 1.422968619s: waiting for machine to come up
	I0223 22:22:23.174280   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:23.174762   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:23.174796   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:23.174689   80738 retry.go:31] will retry after 2.26457317s: waiting for machine to come up
	I0223 22:22:25.440649   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:25.441040   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:25.441077   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:25.441025   80738 retry.go:31] will retry after 2.412299301s: waiting for machine to come up
	I0223 22:22:27.856562   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:27.857000   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:27.857029   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:27.856943   80738 retry.go:31] will retry after 3.510265055s: waiting for machine to come up
	I0223 22:22:31.369182   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.369590   80620 main.go:141] libmachine: (multinode-773885-m02) Found IP for machine: 192.168.39.102
	I0223 22:22:31.369622   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
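
The "Waiting to get IP" loop above retries the DHCP-lease lookup with a growing, jittered delay (193ms up to roughly 3.5s here) until the lease appears. A generic sketch of that retry pattern; lookupIP is a stand-in for querying the libvirt network, and the caps and budget are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(1 * time.Minute) // overall budget is an assumption
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the retry.go lines above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	fmt.Println("timed out waiting for an IP")
}
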
	I0223 22:22:31.369632   80620 main.go:141] libmachine: (multinode-773885-m02) Reserving static IP address...
	I0223 22:22:31.370012   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.370035   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"}
	I0223 22:22:31.370045   80620 main.go:141] libmachine: (multinode-773885-m02) Reserved static IP address: 192.168.39.102
	I0223 22:22:31.370056   80620 main.go:141] libmachine: (multinode-773885-m02) Waiting for SSH to be available...
	I0223 22:22:31.370068   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Getting to WaitForSSH function...
	I0223 22:22:31.372076   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.372417   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.372440   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.372551   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH client type: external
	I0223 22:22:31.372572   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa (-rw-------)
	I0223 22:22:31.372608   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 22:22:31.372622   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | About to run SSH command:
	I0223 22:22:31.372638   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | exit 0
	I0223 22:22:31.506747   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | SSH cmd err, output: <nil>: 
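
WaitForSSH shells out to /usr/bin/ssh with the options logged at 22:22:31 and runs "exit 0"; a zero exit status means sshd is up and the key is accepted. A sketch reusing those flags, with the host, port, and key path copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa",
		"-p", "22",
		"docker@192.168.39.102",
		"exit 0",
	}
	// A zero exit status means SSH is reachable and the key was accepted.
	err := exec.Command("/usr/bin/ssh", args...).Run()
	fmt.Println("ssh available:", err == nil)
}
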
	I0223 22:22:31.507041   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetConfigRaw
	I0223 22:22:31.507719   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:31.510014   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.510356   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.510390   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.510652   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:31.510883   80620 machine.go:88] provisioning docker machine ...
	I0223 22:22:31.510909   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:31.511142   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.511321   80620 buildroot.go:166] provisioning hostname "multinode-773885-m02"
	I0223 22:22:31.511339   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.511489   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.513584   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.513939   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.513969   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.514122   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:31.514268   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.514404   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.514532   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:31.514655   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:31.515234   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:31.515255   80620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773885-m02 && echo "multinode-773885-m02" | sudo tee /etc/hostname
	I0223 22:22:31.655693   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885-m02
	
	I0223 22:22:31.655725   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.658407   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.658788   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.658815   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.658999   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:31.659184   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.659347   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.659464   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:31.659613   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:31.660176   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:31.660212   80620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773885-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773885-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:22:31.799792   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:22:31.799859   80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
	I0223 22:22:31.799879   80620 buildroot.go:174] setting up certificates
	I0223 22:22:31.799889   80620 provision.go:83] configureAuth start
	I0223 22:22:31.799902   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.800252   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:31.803534   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.803989   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.804018   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.804274   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.806753   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.807088   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.807121   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.807237   80620 provision.go:138] copyHostCerts
	I0223 22:22:31.807268   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:22:31.807311   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
	I0223 22:22:31.807324   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:22:31.807414   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
	I0223 22:22:31.807572   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:22:31.807597   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
	I0223 22:22:31.807602   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:22:31.807632   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
	I0223 22:22:31.807685   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:22:31.807702   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
	I0223 22:22:31.807707   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:22:31.807729   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
	I0223 22:22:31.807773   80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885-m02 san=[192.168.39.102 192.168.39.102 localhost 127.0.0.1 minikube multinode-773885-m02]
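
configureAuth signs a fresh server certificate against the existing CA, with the SAN list shown in the provision.go line above (both IPs, localhost, 127.0.0.1, minikube, and the hostname). A condensed crypto/x509 sketch of signing such a certificate; for brevity it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem from disk, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Condensed: a throwaway in-memory CA. The real flow loads ca.pem and
	// ca-key.pem from the .minikube/certs directory shown above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-773885-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.102"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-773885-m02"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
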
	I0223 22:22:32.063720   80620 provision.go:172] copyRemoteCerts
	I0223 22:22:32.063776   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:22:32.063800   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.066310   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.066712   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.066742   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.066876   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.067090   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.067230   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.067359   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:32.161807   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:22:32.161874   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 22:22:32.184819   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:22:32.184883   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 22:22:32.206537   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:22:32.206625   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 22:22:32.228031   80620 provision.go:86] duration metric: configureAuth took 428.129514ms
	I0223 22:22:32.228052   80620 buildroot.go:189] setting minikube options for container-runtime
	I0223 22:22:32.228295   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:32.228322   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:32.228634   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.231144   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.231489   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.231520   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.231601   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.231819   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.231999   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.232117   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.232312   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.232708   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.232719   80620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:22:32.365102   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0223 22:22:32.365122   80620 buildroot.go:70] root file system type: tmpfs
	I0223 22:22:32.365241   80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:22:32.365265   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.367818   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.368241   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.368263   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.368492   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.368703   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.368872   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.368982   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.369180   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.369581   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.369639   80620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.240"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:22:32.513495   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.240
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:22:32.513523   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.515906   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.516266   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.516300   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.516468   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.516680   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.516873   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.517028   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.517178   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.517625   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.517648   80620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:22:33.354684   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0223 22:22:33.354711   80620 machine.go:91] provisioned docker machine in 1.843811829s
	I0223 22:22:33.354721   80620 start.go:300] post-start starting for "multinode-773885-m02" (driver="kvm2")
	I0223 22:22:33.354729   80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:22:33.354752   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.355077   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:22:33.355108   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.357808   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.358150   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.358170   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.358307   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.358509   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.358697   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.358856   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.452337   80620 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:22:33.456207   80620 command_runner.go:130] > NAME=Buildroot
	I0223 22:22:33.456227   80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0223 22:22:33.456233   80620 command_runner.go:130] > ID=buildroot
	I0223 22:22:33.456241   80620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0223 22:22:33.456248   80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0223 22:22:33.456287   80620 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 22:22:33.456303   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
	I0223 22:22:33.456371   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
	I0223 22:22:33.456462   80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
	I0223 22:22:33.456474   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
	I0223 22:22:33.456577   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:22:33.464384   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:22:33.486196   80620 start.go:303] post-start completed in 131.456152ms
	I0223 22:22:33.486221   80620 fix.go:57] fixHost completed within 19.590489491s
	I0223 22:22:33.486246   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.488925   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.489233   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.489259   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.489444   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.489642   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.489819   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.489958   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.490087   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:33.490502   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:33.490517   80620 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0223 22:22:33.619595   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190953.568894594
	
	I0223 22:22:33.619615   80620 fix.go:207] guest clock: 1677190953.568894594
	I0223 22:22:33.619622   80620 fix.go:220] Guest: 2023-02-23 22:22:33.568894594 +0000 UTC Remote: 2023-02-23 22:22:33.48622588 +0000 UTC m=+80.262153220 (delta=82.668714ms)
	I0223 22:22:33.619636   80620 fix.go:191] guest clock delta is within tolerance: 82.668714ms
	I0223 22:22:33.619643   80620 start.go:83] releasing machines lock for "multinode-773885-m02", held for 19.723927358s
	I0223 22:22:33.619668   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.619923   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:33.622598   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.623025   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.623058   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.625082   80620 out.go:177] * Found network options:
	I0223 22:22:33.626668   80620 out.go:177]   - NO_PROXY=192.168.39.240
	W0223 22:22:33.628011   80620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:22:33.628044   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628608   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628794   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628886   80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:22:33.628929   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	W0223 22:22:33.629039   80620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:22:33.629123   80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:22:33.629150   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.631754   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.631877   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632173   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.632199   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632233   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.632253   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632406   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.632530   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.632612   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.632687   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.632797   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.632952   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.632945   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.633068   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.747533   80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:22:33.748590   80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0223 22:22:33.748617   80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 22:22:33.748665   80620 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:22:33.752644   80620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:22:33.752772   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:22:33.762613   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:22:33.779129   80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:22:33.794495   80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0223 22:22:33.794614   80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 22:22:33.794634   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:22:33.794710   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:22:33.819645   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:22:33.819665   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:22:33.819671   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:22:33.819676   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:22:33.819680   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:22:33.819684   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:22:33.819688   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:22:33.819694   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:22:33.819697   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:22:33.819702   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:22:33.819707   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:22:33.821344   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:22:33.821366   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:22:33.821378   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:22:33.821513   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:22:33.838092   80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:22:33.838113   80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:22:33.838173   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:22:33.849104   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:22:33.860042   80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:22:33.860082   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:22:33.871017   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:22:33.881892   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:22:33.892548   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:22:33.903374   80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:22:33.914628   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:22:33.925877   80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:22:33.935581   80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:22:33.935636   80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:22:33.945618   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:34.050114   80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:22:34.068154   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:22:34.068229   80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:22:34.089986   80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0223 22:22:34.090009   80620 command_runner.go:130] > [Unit]
	I0223 22:22:34.090019   80620 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:22:34.090033   80620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:22:34.090041   80620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0223 22:22:34.090049   80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0223 22:22:34.090056   80620 command_runner.go:130] > StartLimitBurst=3
	I0223 22:22:34.090063   80620 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:22:34.090072   80620 command_runner.go:130] > [Service]
	I0223 22:22:34.090083   80620 command_runner.go:130] > Type=notify
	I0223 22:22:34.090089   80620 command_runner.go:130] > Restart=on-failure
	I0223 22:22:34.090104   80620 command_runner.go:130] > Environment=NO_PROXY=192.168.39.240
	I0223 22:22:34.090111   80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:22:34.090118   80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:22:34.090150   80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:22:34.090164   80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:22:34.090170   80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:22:34.090176   80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:22:34.090182   80620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:22:34.090190   80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:22:34.090196   80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:22:34.090200   80620 command_runner.go:130] > ExecStart=
	I0223 22:22:34.090213   80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0223 22:22:34.090219   80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:22:34.090224   80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:22:34.090233   80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:22:34.090237   80620 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:22:34.090241   80620 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:22:34.090245   80620 command_runner.go:130] > LimitCORE=infinity
	I0223 22:22:34.090251   80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:22:34.090256   80620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:22:34.090260   80620 command_runner.go:130] > TasksMax=infinity
	I0223 22:22:34.090265   80620 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:22:34.090273   80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:22:34.090279   80620 command_runner.go:130] > Delegate=yes
	I0223 22:22:34.090285   80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:22:34.090293   80620 command_runner.go:130] > KillMode=process
	I0223 22:22:34.090297   80620 command_runner.go:130] > [Install]
	I0223 22:22:34.090302   80620 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:22:34.090359   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:22:34.105030   80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0223 22:22:34.126591   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:22:34.140060   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:22:34.153929   80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 22:22:34.184699   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:22:34.197888   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:22:34.214560   80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:22:34.214588   80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:22:34.214922   80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:22:34.314415   80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:22:34.423777   80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:22:34.423812   80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:22:34.439350   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:34.539377   80620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:22:35.976151   80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.436733266s)
	I0223 22:22:35.976218   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:22:36.088366   80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:22:36.208338   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:22:36.318554   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:36.423882   80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:22:36.438700   80620 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0223 22:22:36.441277   80620 out.go:177] 
	W0223 22:22:36.442813   80620 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0223 22:22:36.442833   80620 out.go:239] * 
	W0223 22:22:36.443730   80620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 22:22:36.445382   80620 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-773885" : exit status 90
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-773885
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-773885 -n multinode-773885
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-773885 logs -n 25: (1.31682533s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885:/home/docker/cp-test_multinode-773885-m02_multinode-773885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885 sudo cat                                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m02_multinode-773885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03:/home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885-m03 sudo cat                                   | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp testdata/cp-test.txt                                                | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885:/home/docker/cp-test_multinode-773885-m03_multinode-773885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885 sudo cat                                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m03_multinode-773885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02:/home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885-m02 sudo cat                                   | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-773885 node stop m03                                                          | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	| node    | multinode-773885 node start                                                             | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-773885                                                                | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC |                     |
	| stop    | -p multinode-773885                                                                     | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:21 UTC |
	| start   | -p multinode-773885                                                                     | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:21 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-773885                                                                | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 22:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 22:21:13.262206   80620 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:21:13.262485   80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:21:13.262530   80620 out.go:309] Setting ErrFile to fd 2...
	I0223 22:21:13.262547   80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:21:13.263007   80620 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:21:13.263577   80620 out.go:303] Setting JSON to false
	I0223 22:21:13.264336   80620 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7426,"bootTime":1677183448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:21:13.264396   80620 start.go:135] virtualization: kvm guest
	I0223 22:21:13.267622   80620 out.go:177] * [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:21:13.268914   80620 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:21:13.268968   80620 notify.go:220] Checking for updates...
	I0223 22:21:13.270444   80620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:21:13.271889   80620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:13.273288   80620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 22:21:13.274630   80620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:21:13.275971   80620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:21:13.277689   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:13.277751   80620 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:21:13.278270   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:21:13.278328   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:21:13.292096   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0223 22:21:13.292502   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:21:13.293077   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:21:13.293100   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:21:13.293421   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:21:13.293604   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:13.326142   80620 out.go:177] * Using the kvm2 driver based on existing profile
	I0223 22:21:13.327601   80620 start.go:296] selected driver: kvm2
	I0223 22:21:13.327615   80620 start.go:857] validating driver "kvm2" against &{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:13.327745   80620 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:21:13.327989   80620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:21:13.328051   80620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-59858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 22:21:13.341443   80620 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 22:21:13.342073   80620 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 22:21:13.342106   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:13.342116   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:13.342128   80620 start_flags.go:319] config:
	{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:13.342256   80620 iso.go:125] acquiring lock: {Name:mka4f25d544a3ff8c2a2fab814177dd4b23f9fc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:21:13.344079   80620 out.go:177] * Starting control plane node multinode-773885 in cluster multinode-773885
	I0223 22:21:13.345362   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:13.345394   80620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 22:21:13.345409   80620 cache.go:57] Caching tarball of preloaded images
	I0223 22:21:13.345481   80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:21:13.345493   80620 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:21:13.345663   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:21:13.345836   80620 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:21:13.345858   80620 start.go:364] acquiring machines lock for multinode-773885: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 22:21:13.345897   80620 start.go:368] acquired machines lock for "multinode-773885" in 21.539µs
	I0223 22:21:13.345910   80620 start.go:96] Skipping create...Using existing machine configuration
	I0223 22:21:13.345916   80620 fix.go:55] fixHost starting: 
	I0223 22:21:13.346182   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:21:13.346210   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:21:13.358898   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0223 22:21:13.359326   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:21:13.359874   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:21:13.359895   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:21:13.360176   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:21:13.360338   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:13.360464   80620 main.go:141] libmachine: (multinode-773885) Calling .GetState
	I0223 22:21:13.361968   80620 fix.go:103] recreateIfNeeded on multinode-773885: state=Stopped err=<nil>
	I0223 22:21:13.361991   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	W0223 22:21:13.362122   80620 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 22:21:13.364431   80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885" ...
	I0223 22:21:13.365638   80620 main.go:141] libmachine: (multinode-773885) Calling .Start
	I0223 22:21:13.365789   80620 main.go:141] libmachine: (multinode-773885) Ensuring networks are active...
	I0223 22:21:13.366413   80620 main.go:141] libmachine: (multinode-773885) Ensuring network default is active
	I0223 22:21:13.366726   80620 main.go:141] libmachine: (multinode-773885) Ensuring network mk-multinode-773885 is active
	I0223 22:21:13.367088   80620 main.go:141] libmachine: (multinode-773885) Getting domain xml...
	I0223 22:21:13.367766   80620 main.go:141] libmachine: (multinode-773885) Creating domain...
	I0223 22:21:14.564410   80620 main.go:141] libmachine: (multinode-773885) Waiting to get IP...
	I0223 22:21:14.565318   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:14.565709   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:14.565811   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.565729   80650 retry.go:31] will retry after 216.926568ms: waiting for machine to come up
	I0223 22:21:14.784224   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:14.784682   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:14.784711   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.784633   80650 retry.go:31] will retry after 249.246042ms: waiting for machine to come up
	I0223 22:21:15.035098   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.035423   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.035451   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.035397   80650 retry.go:31] will retry after 334.153469ms: waiting for machine to come up
	I0223 22:21:15.370820   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.371326   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.371360   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.371252   80650 retry.go:31] will retry after 394.396319ms: waiting for machine to come up
	I0223 22:21:15.766773   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.767259   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.767292   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.767204   80650 retry.go:31] will retry after 580.71112ms: waiting for machine to come up
	I0223 22:21:16.350049   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:16.350438   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:16.350468   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:16.350387   80650 retry.go:31] will retry after 812.475241ms: waiting for machine to come up
	I0223 22:21:17.164302   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:17.164761   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:17.164794   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:17.164713   80650 retry.go:31] will retry after 1.090615613s: waiting for machine to come up
	I0223 22:21:18.257489   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:18.257882   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:18.257949   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:18.257850   80650 retry.go:31] will retry after 1.207436911s: waiting for machine to come up
	I0223 22:21:19.467391   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:19.467804   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:19.467836   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:19.467758   80650 retry.go:31] will retry after 1.522373862s: waiting for machine to come up
	I0223 22:21:20.992569   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:20.992936   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:20.992965   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:20.992883   80650 retry.go:31] will retry after 2.133891724s: waiting for machine to come up
	I0223 22:21:23.129156   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:23.129626   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:23.129648   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:23.129597   80650 retry.go:31] will retry after 2.398257467s: waiting for machine to come up
	I0223 22:21:25.529031   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:25.529472   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:25.529508   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:25.529418   80650 retry.go:31] will retry after 2.616816039s: waiting for machine to come up
	I0223 22:21:28.149307   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:28.149703   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:28.149732   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:28.149668   80650 retry.go:31] will retry after 3.093858159s: waiting for machine to come up
	I0223 22:21:31.245491   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.245970   80620 main.go:141] libmachine: (multinode-773885) Found IP for machine: 192.168.39.240
	I0223 22:21:31.245992   80620 main.go:141] libmachine: (multinode-773885) Reserving static IP address...
	I0223 22:21:31.246035   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has current primary IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.246498   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.246523   80620 main.go:141] libmachine: (multinode-773885) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"}
	I0223 22:21:31.246531   80620 main.go:141] libmachine: (multinode-773885) Reserved static IP address: 192.168.39.240
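
(For context on the run of "will retry after ..." lines above: the kvm2 driver polls libvirt for the domain's DHCP lease and sleeps for a growing, jittered interval between attempts until the lease appears, then reserves the IP it finds. Below is a minimal Go sketch of that wait loop; `waitForIP` and its `lookup` callback are hypothetical names for illustration, not minikube's actual retry.go.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it yields an address, sleeping for a
    // jittered, roughly doubling delay between attempts, like the
    // "will retry after ...: waiting for machine to come up" lines above.
    func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // Up to 25% jitter keeps concurrent waiters from synchronizing.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/4))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 { // pretend the DHCP lease shows up on the 4th poll
                return "", errors.New("no DHCP lease yet")
            }
            return "192.168.39.240", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }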
	I0223 22:21:31.246540   80620 main.go:141] libmachine: (multinode-773885) Waiting for SSH to be available...
	I0223 22:21:31.246549   80620 main.go:141] libmachine: (multinode-773885) DBG | Getting to WaitForSSH function...
	I0223 22:21:31.248477   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.248821   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.248848   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.248945   80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH client type: external
	I0223 22:21:31.248970   80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa (-rw-------)
	I0223 22:21:31.249043   80620 main.go:141] libmachine: (multinode-773885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 22:21:31.249076   80620 main.go:141] libmachine: (multinode-773885) DBG | About to run SSH command:
	I0223 22:21:31.249094   80620 main.go:141] libmachine: (multinode-773885) DBG | exit 0
	I0223 22:21:31.338971   80620 main.go:141] libmachine: (multinode-773885) DBG | SSH cmd err, output: <nil>: 
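
(The "Using SSH client type: external" block above shows how SSH availability is probed: shell out to /usr/bin/ssh with host-key checking disabled and run `exit 0`; a zero exit status means sshd inside the VM is answering. A compact Go equivalent under the same assumptions follows, using only a subset of the logged option vector; `sshAvailable` is an illustrative name, not a minikube function.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshAvailable probes a VM the way the external SSH client above does:
    // run "exit 0" over ssh and treat a zero exit status as "sshd is up".
    // The user, address, and key path are the ones from this log, used
    // purely as an illustration.
    func sshAvailable(user, host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
        err := sshAvailable("docker", "192.168.39.240",
            "/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa")
        fmt.Println("ssh reachable:", err == nil)
    }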
	I0223 22:21:31.339315   80620 main.go:141] libmachine: (multinode-773885) Calling .GetConfigRaw
	I0223 22:21:31.339952   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:31.342708   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.343091   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.343112   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.343382   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:21:31.343587   80620 machine.go:88] provisioning docker machine ...
	I0223 22:21:31.343612   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:31.343856   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.344026   80620 buildroot.go:166] provisioning hostname "multinode-773885"
	I0223 22:21:31.344045   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.344189   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.346343   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.346741   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.346772   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.346912   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.347101   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.347235   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.347362   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.347563   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.347987   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.348001   80620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773885 && echo "multinode-773885" | sudo tee /etc/hostname
	I0223 22:21:31.483698   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885
	
	I0223 22:21:31.483729   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.486353   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.486705   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.486729   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.486927   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.487146   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.487349   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.487567   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.487765   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.488223   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.488247   80620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:21:31.610531   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:21:31.610563   80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
	I0223 22:21:31.610579   80620 buildroot.go:174] setting up certificates
	I0223 22:21:31.610589   80620 provision.go:83] configureAuth start
	I0223 22:21:31.610602   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.610887   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:31.613554   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.613875   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.613901   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.614087   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.616271   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.616732   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.616766   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.616828   80620 provision.go:138] copyHostCerts
	I0223 22:21:31.616880   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:21:31.616925   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
	I0223 22:21:31.616938   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:21:31.617049   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
	I0223 22:21:31.617142   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:21:31.617171   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
	I0223 22:21:31.617182   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:21:31.617225   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
	I0223 22:21:31.617338   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:21:31.617367   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
	I0223 22:21:31.617373   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:21:31.617412   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
	I0223 22:21:31.617475   80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-773885]
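
(The provision.go:112 line above issues a per-machine server certificate signed by the local CA, with the listed IPs and hostnames as SANs, for Docker's TLS endpoint. Below is a generic crypto/x509 sketch of issuing such a cert; the key size, lifetimes, and CA subject are assumptions for the example, not necessarily what minikube uses.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by the given CA,
    // carrying the same kind of SAN list logged above (IPs plus DNS names).
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-773885"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.240"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-773885"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Self-sign a throwaway CA just so the sketch runs end to end.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "exampleCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, _, err := newServerCert(ca, caKey)
        fmt.Println("server cert DER bytes:", len(der), "err:", err)
    }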
	I0223 22:21:31.813280   80620 provision.go:172] copyRemoteCerts
	I0223 22:21:31.813353   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:21:31.813402   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.816285   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.816679   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.816716   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.816918   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.817162   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.817351   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.817481   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:31.903913   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:21:31.904023   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 22:21:31.928843   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:21:31.928908   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 22:21:31.953083   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:21:31.953136   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:21:31.977825   80620 provision.go:86] duration metric: configureAuth took 367.222576ms
	I0223 22:21:31.977848   80620 buildroot.go:189] setting minikube options for container-runtime
	I0223 22:21:31.978069   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:31.978096   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:31.978344   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.980808   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.981196   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.981226   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.981404   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.981631   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.981794   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.981903   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.982052   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.982469   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.982488   80620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:21:32.100345   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0223 22:21:32.100366   80620 buildroot.go:70] root file system type: tmpfs
	I0223 22:21:32.100467   80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:21:32.100489   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:32.103003   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.103407   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:32.103436   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.103637   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:32.103824   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.103965   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.104148   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:32.104371   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:32.104858   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:32.104953   80620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:21:32.237312   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:21:32.237343   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:32.240081   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.240430   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:32.240481   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.240599   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:32.240764   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.240928   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.241022   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:32.241158   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:32.241558   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:32.241575   80620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:21:33.112176   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0223 22:21:33.112206   80620 machine.go:91] provisioned docker machine in 1.76860164s
	I0223 22:21:33.112216   80620 start.go:300] post-start starting for "multinode-773885" (driver="kvm2")
	I0223 22:21:33.112222   80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:21:33.112238   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.112595   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:21:33.112636   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.115711   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.116122   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.116159   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.116274   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.116476   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.116715   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.116933   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.204860   80620 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:21:33.208799   80620 command_runner.go:130] > NAME=Buildroot
	I0223 22:21:33.208819   80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0223 22:21:33.208823   80620 command_runner.go:130] > ID=buildroot
	I0223 22:21:33.208829   80620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0223 22:21:33.208833   80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0223 22:21:33.208858   80620 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 22:21:33.208867   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
	I0223 22:21:33.208924   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
	I0223 22:21:33.208996   80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
	I0223 22:21:33.209017   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
	I0223 22:21:33.209096   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:21:33.216834   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:21:33.238598   80620 start.go:303] post-start completed in 126.369412ms
	I0223 22:21:33.238618   80620 fix.go:57] fixHost completed within 19.892701007s
	I0223 22:21:33.238638   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.241628   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.242000   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.242020   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.242184   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.242377   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.242544   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.242697   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.242867   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:33.243253   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:33.243264   80620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0223 22:21:33.359558   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190893.310436860
	
	I0223 22:21:33.359587   80620 fix.go:207] guest clock: 1677190893.310436860
	I0223 22:21:33.359596   80620 fix.go:220] Guest: 2023-02-23 22:21:33.31043686 +0000 UTC Remote: 2023-02-23 22:21:33.238622371 +0000 UTC m=+20.014549698 (delta=71.814489ms)
	I0223 22:21:33.359621   80620 fix.go:191] guest clock delta is within tolerance: 71.814489ms
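
(The fix.go lines above compare the guest clock, evidently the output of `date +%s.%N` logged here with Go's `%!s(MISSING)` verb artifact, against the host clock, and proceed only when the delta is within tolerance. A small Go sketch of that check follows; the 2s tolerance and the function name are assumptions for the example.)

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDeltaOK parses the guest's "seconds.nanoseconds" timestamp and
    // checks its offset from the host clock against a tolerance. float64
    // keeps sub-microsecond precision at this magnitude, plenty for the check.
    func clockDeltaOK(guest string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(guest, 64)
        if err != nil {
            return 0, false, err
        }
        guestTime := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guestTime.Sub(host)
        return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
    }

    func main() {
        // Values taken from the log lines above: guest vs. host timestamps.
        delta, ok, err := clockDeltaOK("1677190893.310436860",
            time.Unix(1677190893, 238622371), 2*time.Second)
        fmt.Println(delta, ok, err) // ~71.8ms, true, <nil>
    }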
	I0223 22:21:33.359628   80620 start.go:83] releasing machines lock for "multinode-773885", held for 20.013722401s
	I0223 22:21:33.359654   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.359925   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:33.362448   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.362830   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.362872   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.362979   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363495   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363673   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363761   80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:21:33.363798   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.363978   80620 ssh_runner.go:195] Run: cat /version.json
	I0223 22:21:33.364008   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.366567   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.366853   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.366894   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.366918   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.367103   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.367284   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.367338   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.367363   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.367483   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.367511   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.367637   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.367796   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.367946   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.368088   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.472525   80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:21:33.472587   80620 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
	I0223 22:21:33.472717   80620 ssh_runner.go:195] Run: systemctl --version
	I0223 22:21:33.478170   80620 command_runner.go:130] > systemd 247 (247)
	I0223 22:21:33.478214   80620 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0223 22:21:33.478449   80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:21:33.483322   80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0223 22:21:33.483517   80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 22:21:33.483559   80620 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:21:33.486877   80620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:21:33.486963   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:21:33.494937   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:21:33.509789   80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:21:33.522704   80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0223 22:21:33.523037   80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 22:21:33.523053   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:33.523114   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:33.547334   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:33.547357   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:33.547366   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:33.547373   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:33.547379   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:33.547386   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:33.547393   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:33.547402   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:33.547409   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:33.547429   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:33.547437   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:33.548840   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:33.548856   80620 docker.go:560] Images already preloaded, skipping extraction
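
("Images already preloaded, skipping extraction" is decided by listing the tags the docker daemon already has and confirming every required image is present, so the preload tarball never needs to be unpacked. A minimal sketch of that containment check follows; the required list is abbreviated from the log and `imagesPreloaded` is an illustrative name.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded lists the daemon's images the same way the log does
    // (docker images --format {{.Repository}}:{{.Tag}}) and reports whether
    // every required image is already present.
    func imagesPreloaded(required []string) (bool, error) {
        out, err := exec.Command("docker", "images",
            "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(string(out), "\n") {
            have[strings.TrimSpace(line)] = true
        }
        for _, img := range required {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{
            "registry.k8s.io/kube-apiserver:v1.26.1",
            "registry.k8s.io/etcd:3.5.6-0",
            "registry.k8s.io/pause:3.9",
        })
        fmt.Println("preloaded:", ok, "err:", err)
    }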
	I0223 22:21:33.548865   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:21:33.548962   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:21:33.565249   80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:21:33.565271   80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:21:33.565339   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:21:33.574475   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:21:33.582936   80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:21:33.582977   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:21:33.591609   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:21:33.600301   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:21:33.608920   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:21:33.617470   80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:21:33.626224   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:21:33.634536   80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:21:33.642631   80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:21:33.642679   80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:21:33.650322   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:33.748276   80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:21:33.765231   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:21:33.765298   80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:21:33.783055   80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0223 22:21:33.783552   80620 command_runner.go:130] > [Unit]
	I0223 22:21:33.783568   80620 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:21:33.783574   80620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:21:33.783579   80620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0223 22:21:33.783584   80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0223 22:21:33.783589   80620 command_runner.go:130] > StartLimitBurst=3
	I0223 22:21:33.783595   80620 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:21:33.783598   80620 command_runner.go:130] > [Service]
	I0223 22:21:33.783603   80620 command_runner.go:130] > Type=notify
	I0223 22:21:33.783607   80620 command_runner.go:130] > Restart=on-failure
	I0223 22:21:33.783614   80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:21:33.783625   80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:21:33.783631   80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:21:33.783640   80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:21:33.783647   80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:21:33.783653   80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:21:33.783660   80620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:21:33.783668   80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:21:33.783674   80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:21:33.783678   80620 command_runner.go:130] > ExecStart=
	I0223 22:21:33.783691   80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0223 22:21:33.783696   80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:21:33.783702   80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:21:33.783708   80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:21:33.783712   80620 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:21:33.783715   80620 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:21:33.783719   80620 command_runner.go:130] > LimitCORE=infinity
	I0223 22:21:33.783724   80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:21:33.783728   80620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:21:33.783733   80620 command_runner.go:130] > TasksMax=infinity
	I0223 22:21:33.783736   80620 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:21:33.783742   80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:21:33.783746   80620 command_runner.go:130] > Delegate=yes
	I0223 22:21:33.783751   80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:21:33.783755   80620 command_runner.go:130] > KillMode=process
	I0223 22:21:33.783758   80620 command_runner.go:130] > [Install]
	I0223 22:21:33.783765   80620 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:21:33.784203   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:21:33.800310   80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0223 22:21:33.820089   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:21:33.831934   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:21:33.843320   80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 22:21:33.870509   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:21:33.882768   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:21:33.898405   80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:21:33.898433   80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:21:33.898700   80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:21:33.998916   80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:21:34.101490   80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:21:34.101526   80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:21:34.117559   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:34.221898   80620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:21:35.643194   80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.421256026s)
	I0223 22:21:35.643291   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:21:35.759716   80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:21:35.863224   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:21:35.965951   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:36.072240   80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:21:36.092427   80620 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:21:36.092508   80620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:21:36.104108   80620 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:21:36.104128   80620 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:21:36.104134   80620 command_runner.go:130] > Device: 16h/22d	Inode: 814         Links: 1
	I0223 22:21:36.104143   80620 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0223 22:21:36.104156   80620 command_runner.go:130] > Access: 2023-02-23 22:21:36.038985633 +0000
	I0223 22:21:36.104168   80620 command_runner.go:130] > Modify: 2023-02-23 22:21:36.038985633 +0000
	I0223 22:21:36.104180   80620 command_runner.go:130] > Change: 2023-02-23 22:21:36.041985633 +0000
	I0223 22:21:36.104189   80620 command_runner.go:130] >  Birth: -
	I0223 22:21:36.104213   80620 start.go:553] Will wait 60s for crictl version
	I0223 22:21:36.104260   80620 ssh_runner.go:195] Run: which crictl
	I0223 22:21:36.110223   80620 command_runner.go:130] > /usr/bin/crictl
	I0223 22:21:36.110588   80620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:21:36.185549   80620 command_runner.go:130] > Version:  0.1.0
	I0223 22:21:36.185577   80620 command_runner.go:130] > RuntimeName:  docker
	I0223 22:21:36.185585   80620 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0223 22:21:36.185593   80620 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:21:36.185626   80620 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0223 22:21:36.185698   80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:21:36.217919   80620 command_runner.go:130] > 20.10.23
	I0223 22:21:36.219196   80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:21:36.248973   80620 command_runner.go:130] > 20.10.23
	I0223 22:21:36.253095   80620 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0223 22:21:36.253136   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:36.255830   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:36.256233   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:36.256260   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:36.256492   80620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0223 22:21:36.260126   80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:21:36.272218   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:36.272269   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:36.294497   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:36.294518   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:36.294523   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:36.294528   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:36.294532   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:36.294536   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:36.294541   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:36.294546   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:36.294550   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:36.294554   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:36.294558   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:36.295537   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:36.295553   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:21:36.295600   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:36.317087   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:36.317104   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:36.317109   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:36.317114   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:36.317119   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:36.317123   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:36.317127   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:36.317133   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:36.317137   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:36.317142   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:36.317149   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:36.318116   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:36.318131   80620 cache_images.go:84] Images are preloaded, skipping loading
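The check above compares the `docker images` listing against the image set expected for Kubernetes v1.26.1; since nothing is missing, extraction of the preload tarball is skipped. A Go sketch of that set comparison (missingImages is my name for it, not minikube's):

// preloadcheck.go - sketch of the "images already preloaded" decision above.
package main

import "fmt"

func missingImages(have, want []string) []string {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	var missing []string
	for _, img := range want {
		if !got[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	have := []string{"registry.k8s.io/kube-apiserver:v1.26.1", "registry.k8s.io/etcd:3.5.6-0"}
	want := []string{"registry.k8s.io/kube-apiserver:v1.26.1", "registry.k8s.io/etcd:3.5.6-0", "registry.k8s.io/pause:3.9"}
	fmt.Println(missingImages(have, want)) // [registry.k8s.io/pause:3.9]
}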
	I0223 22:21:36.318198   80620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:21:36.351288   80620 command_runner.go:130] > cgroupfs
	I0223 22:21:36.352347   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:36.352366   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:36.352384   80620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:21:36.352404   80620 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773885 NodeName:multinode-773885 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:21:36.352535   80620 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-773885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
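The generated kubeadm.yaml above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A stdlib-only Go sketch that splits such a file and reports each document's kind (the function name kinds is mine):

// splitdocs.go - sketch: list the kind of each YAML document in a multi-doc file.
package main

import (
	"fmt"
	"strings"
)

func kinds(yamlText string) []string {
	var out []string
	for _, doc := range strings.Split(yamlText, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
				break
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}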
	I0223 22:21:36.352608   80620 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-773885 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
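The kubelet drop-in above contains a deliberately empty "ExecStart=" line: in systemd, an empty assignment resets any previously defined command, so the drop-in's own ExecStart becomes the only one. A sketch of rendering such a drop-in with text/template, under the assumption of my own field names (this is not minikube's template verbatim):

// dropin.go - sketch of rendering a 10-kubeadm.conf-style drop-in.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} {{.Flags}}

[Install]
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(dropin))
	// Values below mirror the log but are illustrative only.
	t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.26.1/kubelet",
		"Flags":       "--config=/var/lib/kubelet/config.yaml --node-ip=192.168.39.240",
	})
}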
	I0223 22:21:36.352654   80620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:21:36.361734   80620 command_runner.go:130] > kubeadm
	I0223 22:21:36.361745   80620 command_runner.go:130] > kubectl
	I0223 22:21:36.361749   80620 command_runner.go:130] > kubelet
	I0223 22:21:36.361984   80620 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:21:36.362045   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 22:21:36.369631   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0223 22:21:36.384815   80620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:21:36.399471   80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0223 22:21:36.414791   80620 ssh_runner.go:195] Run: grep 192.168.39.240	control-plane.minikube.internal$ /etc/hosts
	I0223 22:21:36.418133   80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:21:36.429567   80620 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885 for IP: 192.168.39.240
	I0223 22:21:36.429596   80620 certs.go:186] acquiring lock for shared ca certs: {Name:mkb47a35d7b33f6ba829c92dc16cfaf70cb716c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:36.429732   80620 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key
	I0223 22:21:36.429768   80620 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key
	I0223 22:21:36.429863   80620 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key
	I0223 22:21:36.429933   80620 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key.ac2ca5a7
	I0223 22:21:36.429971   80620 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key
	I0223 22:21:36.429982   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 22:21:36.429999   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 22:21:36.430009   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 22:21:36.430023   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 22:21:36.430035   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:21:36.430047   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:21:36.430058   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:21:36.430070   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:21:36.430120   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem (1338 bytes)
	W0223 22:21:36.430145   80620 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927_empty.pem, impossibly tiny 0 bytes
	I0223 22:21:36.430155   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:21:36.430178   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem (1078 bytes)
	I0223 22:21:36.430200   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:21:36.430224   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem (1671 bytes)
	I0223 22:21:36.430265   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:21:36.430293   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.430307   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.430319   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem -> /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.430835   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 22:21:36.452666   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 22:21:36.474354   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 22:21:36.496347   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 22:21:36.518192   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:21:36.539742   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:21:36.561567   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:21:36.582936   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:21:36.605667   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /usr/share/ca-certificates/669272.pem (1708 bytes)
	I0223 22:21:36.627349   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:21:36.649138   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem --> /usr/share/ca-certificates/66927.pem (1338 bytes)
	I0223 22:21:36.670645   80620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 22:21:36.685674   80620 ssh_runner.go:195] Run: openssl version
	I0223 22:21:36.690629   80620 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0223 22:21:36.690924   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66927.pem && ln -fs /usr/share/ca-certificates/66927.pem /etc/ssl/certs/66927.pem"
	I0223 22:21:36.699754   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.703759   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.704095   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.704128   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.709182   80620 command_runner.go:130] > 51391683
	I0223 22:21:36.709238   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/66927.pem /etc/ssl/certs/51391683.0"
	I0223 22:21:36.718122   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669272.pem && ln -fs /usr/share/ca-certificates/669272.pem /etc/ssl/certs/669272.pem"
	I0223 22:21:36.726789   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730766   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730841   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730885   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.735795   80620 command_runner.go:130] > 3ec20f2e
	I0223 22:21:36.736176   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/669272.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 22:21:36.745026   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:21:36.753682   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757609   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757830   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757864   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.762876   80620 command_runner.go:130] > b5213941
	I0223 22:21:36.762930   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
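The openssl steps above follow the classic OpenSSL trust-store convention: /etc/ssl/certs/<subject-hash>.0 must link to the PEM so TLS clients can locate a CA by its subject hash. A Go sketch that shells out to openssl for the hash and creates the link (paths are illustrative; the real run does this inside the VM over SSH):

// certlink.go - sketch of the cert-store linking shown above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA above
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}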
	I0223 22:21:36.771746   80620 kubeadm.go:401] StartCluster: {Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:36.771889   80620 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:21:36.795673   80620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 22:21:36.804158   80620 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0223 22:21:36.804177   80620 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0223 22:21:36.804208   80620 command_runner.go:130] > /var/lib/minikube/etcd:
	I0223 22:21:36.804223   80620 command_runner.go:130] > member
	I0223 22:21:36.804253   80620 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 22:21:36.804270   80620 kubeadm.go:633] restartCluster start
	I0223 22:21:36.804326   80620 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 22:21:36.812345   80620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:36.812718   80620 kubeconfig.go:135] verify returned: extract IP: "multinode-773885" does not appear in /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:36.812798   80620 kubeconfig.go:146] "multinode-773885" context is missing from /home/jenkins/minikube-integration/15909-59858/kubeconfig - will repair!
	I0223 22:21:36.813094   80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:36.813506   80620 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:36.813719   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:21:36.814424   80620 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 22:21:36.814616   80620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 22:21:36.822391   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:36.822434   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:36.832386   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:37.333153   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:37.333231   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:37.344298   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:37.832833   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:37.832931   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:37.843863   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:38.333039   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:38.333157   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:38.344397   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:38.833335   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:38.833418   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:38.844307   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:39.332585   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:39.332660   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:39.343665   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:39.833274   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:39.833358   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:39.844484   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:40.332983   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:40.333065   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:40.344099   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:40.832657   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:40.832750   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:40.843615   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:41.333154   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:41.333245   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:41.344059   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:41.832619   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:41.832703   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:41.843654   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:42.333248   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:42.333328   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:42.344533   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:42.833157   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:42.833256   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:42.843975   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:43.333351   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:43.333418   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:43.344740   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:43.832562   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:43.832672   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:43.843659   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:44.333327   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:44.333407   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:44.344578   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:44.833173   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:44.833245   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:44.844332   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:45.332909   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:45.333037   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:45.344107   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:45.832647   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:45.832732   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:45.843986   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.332538   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.332617   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.343428   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.833367   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.833455   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.844521   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.844541   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.844582   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.854411   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.854446   80620 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
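The block above is a bounded retry loop: roughly every 500ms minikube re-runs pgrep for the apiserver over SSH, and after about ten seconds without a PID it concludes the cluster "needs reconfigure". A local Go sketch of that loop with an explicit context deadline (names are mine, not minikube's):

// apiwait.go - sketch of the apiserver-PID retry loop above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(ctx context.Context) (string, error) {
	for {
		// Same command as in the log; non-zero exit means "no match yet".
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServerPID(ctx))
}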
	I0223 22:21:46.854455   80620 kubeadm.go:1120] stopping kube-system containers ...
	I0223 22:21:46.854520   80620 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:21:46.882631   80620 command_runner.go:130] > a31cf43457e0
	I0223 22:21:46.882655   80620 command_runner.go:130] > b83daa4cdd8d
	I0223 22:21:46.882661   80620 command_runner.go:130] > 75e472928e30
	I0223 22:21:46.882666   80620 command_runner.go:130] > 20f2e353f8d4
	I0223 22:21:46.882674   80620 command_runner.go:130] > f6b2b873cba9
	I0223 22:21:46.882682   80620 command_runner.go:130] > 6becaf5c8640
	I0223 22:21:46.882688   80620 command_runner.go:130] > a2a9a29b5a41
	I0223 22:21:46.882694   80620 command_runner.go:130] > f284ce294fa0
	I0223 22:21:46.882700   80620 command_runner.go:130] > 8d29ee663e61
	I0223 22:21:46.882707   80620 command_runner.go:130] > baad115b76c6
	I0223 22:21:46.882725   80620 command_runner.go:130] > 53723346fe3c
	I0223 22:21:46.882735   80620 command_runner.go:130] > 6a41aad93299
	I0223 22:21:46.882743   80620 command_runner.go:130] > 745d6ec7adf4
	I0223 22:21:46.882750   80620 command_runner.go:130] > 979e703c6176
	I0223 22:21:46.882757   80620 command_runner.go:130] > 3b6e6d975efa
	I0223 22:21:46.882766   80620 command_runner.go:130] > 072b5f08a10f
	I0223 22:21:46.882797   80620 docker.go:456] Stopping containers: [a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f]
	I0223 22:21:46.882868   80620 ssh_runner.go:195] Run: docker stop a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f
	I0223 22:21:46.908823   80620 command_runner.go:130] > a31cf43457e0
	I0223 22:21:46.908844   80620 command_runner.go:130] > b83daa4cdd8d
	I0223 22:21:46.908853   80620 command_runner.go:130] > 75e472928e30
	I0223 22:21:46.908858   80620 command_runner.go:130] > 20f2e353f8d4
	I0223 22:21:46.908865   80620 command_runner.go:130] > f6b2b873cba9
	I0223 22:21:46.908870   80620 command_runner.go:130] > 6becaf5c8640
	I0223 22:21:46.908876   80620 command_runner.go:130] > a2a9a29b5a41
	I0223 22:21:46.909404   80620 command_runner.go:130] > f284ce294fa0
	I0223 22:21:46.909419   80620 command_runner.go:130] > 8d29ee663e61
	I0223 22:21:46.909424   80620 command_runner.go:130] > baad115b76c6
	I0223 22:21:46.909441   80620 command_runner.go:130] > 53723346fe3c
	I0223 22:21:46.909828   80620 command_runner.go:130] > 6a41aad93299
	I0223 22:21:46.909847   80620 command_runner.go:130] > 745d6ec7adf4
	I0223 22:21:46.909853   80620 command_runner.go:130] > 979e703c6176
	I0223 22:21:46.909858   80620 command_runner.go:130] > 3b6e6d975efa
	I0223 22:21:46.909864   80620 command_runner.go:130] > 072b5f08a10f
	I0223 22:21:46.911025   80620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 22:21:46.925825   80620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 22:21:46.933780   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 22:21:46.933807   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 22:21:46.933818   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 22:21:46.933842   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:21:46.934068   80620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:21:46.934127   80620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 22:21:46.942292   80620 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 22:21:46.942311   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.060140   80620 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:21:47.060421   80620 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 22:21:47.060722   80620 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 22:21:47.061266   80620 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 22:21:47.061579   80620 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0223 22:21:47.062097   80620 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0223 22:21:47.062730   80620 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0223 22:21:47.063273   80620 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0223 22:21:47.063668   80620 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0223 22:21:47.064166   80620 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 22:21:47.064500   80620 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 22:21:47.064789   80620 command_runner.go:130] > [certs] Using the existing "sa" key
	I0223 22:21:47.066082   80620 command_runner.go:130] ! W0223 22:21:47.003599    1259 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.066190   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.118462   80620 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:21:47.207705   80620 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:21:47.310176   80620 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:21:47.491530   80620 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:21:47.570853   80620 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:21:47.573364   80620 command_runner.go:130] ! W0223 22:21:47.061082    1265 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.573502   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.637325   80620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:21:47.638644   80620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:21:47.638664   80620 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:21:47.751602   80620 command_runner.go:130] ! W0223 22:21:47.567753    1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.751640   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.811937   80620 command_runner.go:130] ! W0223 22:21:47.761774    1293 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.829349   80620 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:21:47.829375   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:21:47.829384   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:21:47.829392   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:21:47.829573   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.919203   80620 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:21:47.922916   80620 command_runner.go:130] ! W0223 22:21:47.858650    1302 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
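The kubeadm warning repeated above concerns the scheme-less criSocket value /var/run/cri-dockerd.sock: kubeadm prepends "unix" automatically but asks for the configuration to be updated. A sketch of that normalization:

// crisocket.go - sketch of the CRI-socket scheme normalization the warning describes.
package main

import (
	"fmt"
	"strings"
)

func normalizeCRISocket(sock string) string {
	if strings.Contains(sock, "://") {
		return sock // already has a URL scheme, e.g. unix:// or npipe://
	}
	return "unix://" + sock
}

func main() {
	fmt.Println(normalizeCRISocket("/var/run/cri-dockerd.sock")) // unix:///var/run/cri-dockerd.sock
}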
	I0223 22:21:47.923089   80620 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:21:47.923171   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:48.438055   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:48.938524   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:49.437773   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:49.938504   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:50.438625   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:50.455679   80620 command_runner.go:130] > 1675
	I0223 22:21:50.456038   80620 api_server.go:71] duration metric: took 2.532952682s to wait for apiserver process to appear ...
	I0223 22:21:50.456061   80620 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:21:50.456073   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:50.456563   80620 api_server.go:268] stopped: https://192.168.39.240:8443/healthz: Get "https://192.168.39.240:8443/healthz": dial tcp 192.168.39.240:8443: connect: connection refused
	I0223 22:21:50.957285   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:53.851413   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 22:21:53.851440   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 22:21:53.957622   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:53.962959   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:53.962996   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:54.457567   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:54.462593   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:54.462613   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:54.957140   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:54.975573   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:54.975619   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:55.457159   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:55.468052   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
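The healthz sequence above shows the expected progression on a restart: 403 while anonymous access is still forbidden, 500 while poststarthooks (rbac/bootstrap-roles and friends) are pending, then 200. A self-contained Go sketch of such a poller; unlike minikube's check, it skips TLS verification to avoid wiring in the cluster CA, which is only acceptable in a demo:

// healthzpoll.go - sketch of polling the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is just "ok", as in the log
			}
			// 403 (anonymous) and 500 (poststarthooks pending) both mean "retry".
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200")
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.240:8443/healthz", 20))
}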
	I0223 22:21:55.468134   80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0223 22:21:55.468145   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:55.468159   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:55.468173   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:55.478605   80620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0223 22:21:55.478631   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:55.478639   80620 round_trippers.go:580]     Content-Length: 263
	I0223 22:21:55.478645   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:55 GMT
	I0223 22:21:55.478651   80620 round_trippers.go:580]     Audit-Id: 0e80152b-56d5-4ba7-8d3d-ebf4ef092ec4
	I0223 22:21:55.478656   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:55.478661   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:55.478667   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:55.478677   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:55.478720   80620 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:21:55.478820   80620 api_server.go:140] control plane version: v1.26.1
	I0223 22:21:55.478837   80620 api_server.go:130] duration metric: took 5.022769855s to wait for apiserver health ...
	I0223 22:21:55.478847   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:55.478864   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:55.481215   80620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 22:21:55.482654   80620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:21:55.487827   80620 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:21:55.487850   80620 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0223 22:21:55.487860   80620 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0223 22:21:55.487870   80620 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:21:55.487881   80620 command_runner.go:130] > Access: 2023-02-23 22:21:25.431985633 +0000
	I0223 22:21:55.487897   80620 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0223 22:21:55.487905   80620 command_runner.go:130] > Change: 2023-02-23 22:21:23.668985633 +0000
	I0223 22:21:55.487910   80620 command_runner.go:130] >  Birth: -
	I0223 22:21:55.488315   80620 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:21:55.488335   80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:21:55.519404   80620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:21:56.635297   80620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:21:56.642116   80620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:21:56.645709   80620 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 22:21:56.664280   80620 command_runner.go:130] > daemonset.apps/kindnet configured
	I0223 22:21:56.666573   80620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.147136699s)
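
The CNI step above amounts to copying the rendered manifest onto the node and applying it with the node-local kubectl. A rough shell-out equivalent is sketched below; minikube uses its internal ssh_runner rather than the scp/ssh binaries, and the target address and key path here are hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical connection details, for illustration only.
	target := "docker@192.168.39.240"
	key := "/home/user/.minikube/machines/multinode-773885/id_rsa"

	// Copy the rendered CNI manifest onto the node...
	scp := exec.Command("scp", "-i", key, "cni.yaml",
		target+":/var/tmp/minikube/cni.yaml")
	if out, err := scp.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("scp failed: %v: %s", err, out))
	}

	// ...then apply it with the node-local kubectl, as the log does.
	apply := exec.Command("ssh", "-i", key, target,
		"sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply"+
			" --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml")
	if out, err := apply.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("kubectl apply failed: %v: %s", err, out))
	}
	fmt.Println("CNI manifest applied")
}
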
	I0223 22:21:56.666612   80620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:21:56.666717   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:21:56.666728   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.666739   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.666748   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.670034   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:21:56.670049   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.670056   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.670062   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.670081   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.670087   80620 round_trippers.go:580]     Audit-Id: 03e54a77-0840-4896-9a52-5cdd73109000
	I0223 22:21:56.670100   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.670111   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.671358   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
	I0223 22:21:56.675255   80620 system_pods.go:59] 12 kube-system pods found
	I0223 22:21:56.675279   80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:21:56.675286   80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 22:21:56.675291   80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:21:56.675295   80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:21:56.675316   80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:21:56.675325   80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:21:56.675337   80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 22:21:56.675345   80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:21:56.675349   80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:21:56.675356   80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:21:56.675361   80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 22:21:56.675367   80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:21:56.675372   80620 system_pods.go:74] duration metric: took 8.754325ms to wait for pod list to return data ...
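
The system_pods wait is a straightforward pod list against the kube-system namespace. An equivalent client-go sketch, assuming a kubeconfig at a hypothetical path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Prints a summary like the system_pods.go lines above.
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
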
	I0223 22:21:56.675385   80620 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:21:56.675430   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0223 22:21:56.675437   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.675444   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.675451   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.680543   80620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 22:21:56.680557   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.680564   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.680569   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.680577   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.680582   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.680589   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.680597   80620 round_trippers.go:580]     Audit-Id: e86d112e-250e-4963-a6fb-b8fd3c902f59
	I0223 22:21:56.681128   80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16319 chars]
	I0223 22:21:56.681878   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681909   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681918   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681922   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681926   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681932   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681938   80620 node_conditions.go:105] duration metric: took 6.549163ms to run NodePressure ...
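
The NodePressure pass boils down to reading each node's capacity fields. A minimal client-go fragment for that check, assuming a *kubernetes.Clientset built as in the previous sketch (the helper name printNodeCapacity is illustrative):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors the node_conditions.go lines above: list every
// node and report its ephemeral-storage and CPU capacity.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		// Copy the map-indexed Quantities to locals so the pointer-receiver
		// String() method can be called on them.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
	}
	return nil
}
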
	I0223 22:21:56.681958   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:56.825426   80620 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 22:21:56.885114   80620 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 22:21:56.886787   80620 command_runner.go:130] ! W0223 22:21:56.690228    2212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:56.886832   80620 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 22:21:56.886942   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0223 22:21:56.886954   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.886965   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.886975   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.889503   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.889525   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.889536   80620 round_trippers.go:580]     Audit-Id: a9179ace-0f8b-41d7-acc9-15a5468f5431
	I0223 22:21:56.889545   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.889552   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.889561   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.889569   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.889582   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.890569   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29273 chars]
	I0223 22:21:56.891994   80620 kubeadm.go:784] kubelet initialised
	I0223 22:21:56.892020   80620 kubeadm.go:785] duration metric: took 5.174392ms waiting for restarted kubelet to initialise ...
	I0223 22:21:56.892029   80620 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:21:56.892094   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:21:56.892105   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.892115   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.892126   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.898216   80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 22:21:56.898231   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.898240   80620 round_trippers.go:580]     Audit-Id: 0cbc9df8-5ddc-4405-a649-09747f9c7e5c
	I0223 22:21:56.898250   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.898260   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.898268   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.898280   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.898290   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.899125   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
	I0223 22:21:56.901600   80620 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.901668   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:21:56.901680   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.901690   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.901697   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.906528   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:21:56.906543   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.906552   80620 round_trippers.go:580]     Audit-Id: c55b1693-f442-4306-a674-87f938885743
	I0223 22:21:56.906561   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.906571   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.906580   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.906589   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.906602   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.906875   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:21:56.907276   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.907287   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.907294   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.907312   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.916593   80620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0223 22:21:56.916608   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.916616   80620 round_trippers.go:580]     Audit-Id: 3b9497a6-fa4c-472e-b004-b0b6906e7a7f
	I0223 22:21:56.916625   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.916634   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.916644   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.916652   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.916662   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.916802   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.917117   80620 pod_ready.go:97] node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.917132   80620 pod_ready.go:81] duration metric: took 15.512217ms waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.917139   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
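
Each of these waits short-circuits because the hosting node still reports Ready=False. A client-go sketch of that gating logic follows; the helper name, polling interval, and error text are illustrative, not minikube's actual pod_ready.go:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until it reports Ready, failing fast (as the
// log above does) when the hosting node is itself not Ready.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				// Returning an error aborts the poll, matching the
				// "(skipping!)" behaviour seen above.
				return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
			}
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
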
	I0223 22:21:56.917145   80620 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.917197   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
	I0223 22:21:56.917206   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.917213   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.917219   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.919079   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.919091   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.919097   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.919103   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.919108   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.919114   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.919120   80620 round_trippers.go:580]     Audit-Id: 143d00d2-5e6b-44b2-a517-c658e2dc5a9f
	I0223 22:21:56.919129   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.919346   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6289 chars]
	I0223 22:21:56.919779   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.919793   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.919802   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.919808   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.921391   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.921406   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.921413   80620 round_trippers.go:580]     Audit-Id: 9f5eac9e-078a-4143-9d6d-1b1de0a3102a
	I0223 22:21:56.921423   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.921431   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.921440   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.921450   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.921460   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.921618   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.921957   80620 pod_ready.go:97] node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.921972   80620 pod_ready.go:81] duration metric: took 4.821003ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.921981   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.921998   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.922055   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
	I0223 22:21:56.922065   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.922076   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.922089   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.925010   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.925024   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.925033   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.925043   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.925052   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.925061   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.925070   80620 round_trippers.go:580]     Audit-Id: 422d48f0-48d6-4c16-8b22-40f26357fc34
	I0223 22:21:56.925075   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.925261   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"282","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0223 22:21:56.925639   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.925652   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.925659   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.925666   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.927337   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.927356   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.927365   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.927373   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.927382   80620 round_trippers.go:580]     Audit-Id: 020b9a46-ef43-4607-90e4-5d3e9e7d1a08
	I0223 22:21:56.927392   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.927401   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.927413   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.927579   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.927921   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.927940   80620 pod_ready.go:81] duration metric: took 5.928725ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.927950   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.927957   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.928048   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
	I0223 22:21:56.928062   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.928072   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.928082   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.930936   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.930950   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.930956   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.930961   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.930968   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.930982   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.930995   80620 round_trippers.go:580]     Audit-Id: 00aa01ac-5a84-4085-b3b5-f5f6d06fbe47
	I0223 22:21:56.931005   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.931218   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"739","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7424 chars]
	I0223 22:21:57.067070   80620 request.go:622] Waited for 135.338555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.067135   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.067145   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.067163   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.067176   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.070119   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.070137   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.070143   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.070149   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.070155   80620 round_trippers.go:580]     Audit-Id: 5d3402dd-3874-4131-9278-561b1ef77762
	I0223 22:21:57.070161   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.070167   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.070178   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.070297   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:57.070668   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.070691   80620 pod_ready.go:81] duration metric: took 142.727116ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:57.070704   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
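
The request.go:622 "Waited ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, not the server's priority-and-fairness machinery. A sketch of where those limits live, with hypothetical raised values (the test harness itself runs with the defaults):

package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newTunedClient raises the client-side rate limits. client-go defaults to
// QPS=5 with Burst=10, so bursts of GETs like the ones above queue locally
// for ~200ms each and log the throttling message.
func newTunedClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // hypothetical tuning, not what the test harness does
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
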
	I0223 22:21:57.070713   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.267166   80620 request.go:622] Waited for 196.388978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:21:57.267229   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:21:57.267239   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.267252   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.267264   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.269968   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.269991   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.270000   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.270012   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.270084   80620 round_trippers.go:580]     Audit-Id: 27049171-e30c-4ab9-a6ed-77da398a4856
	I0223 22:21:57.270104   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.270113   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.270123   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.270261   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0223 22:21:57.467146   80620 request.go:622] Waited for 196.375195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:21:57.467201   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:21:57.467207   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.467216   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.467235   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.469655   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.469680   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.469690   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.469716   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.469727   80620 round_trippers.go:580]     Audit-Id: d420f22f-77bb-4122-826c-40660cb2d6fb
	I0223 22:21:57.469734   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.469741   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.469749   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.469921   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0223 22:21:57.470230   80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
	I0223 22:21:57.470242   80620 pod_ready.go:81] duration metric: took 399.521519ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.470250   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.667697   80620 request.go:622] Waited for 197.385632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:21:57.667766   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:21:57.667771   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.667778   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.667785   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.670278   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.670298   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.670308   80620 round_trippers.go:580]     Audit-Id: 0128213a-339a-470c-989d-e7b486abebe1
	I0223 22:21:57.670316   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.670324   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.670333   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.670342   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.670351   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.670879   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"377","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 22:21:57.867695   80620 request.go:622] Waited for 196.388162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.867765   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.867770   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.867778   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.867784   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.870409   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.870431   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.870442   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.870452   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.870460   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.870466   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.870474   80620 round_trippers.go:580]     Audit-Id: a53d6f4e-2730-4846-9147-87d2b5b1bc56
	I0223 22:21:57.870483   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.870627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:57.870935   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.870951   80620 pod_ready.go:81] duration metric: took 400.694245ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:57.870962   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.870970   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.067390   80620 request.go:622] Waited for 196.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:21:58.067527   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:21:58.067575   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.067593   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.067604   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.071162   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:21:58.071181   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.071191   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.071199   80620 round_trippers.go:580]     Audit-Id: 49f82db0-63aa-4950-9457-03eeb73d1c6f
	I0223 22:21:58.071207   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.071215   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.071223   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.071231   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.071517   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0223 22:21:58.267044   80620 request.go:622] Waited for 195.100843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:21:58.267131   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:21:58.267138   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.267150   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.267161   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.269786   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.269805   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.269812   80620 round_trippers.go:580]     Audit-Id: 28398178-6b4f-4ced-bd50-76b0a4e432c0
	I0223 22:21:58.269818   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.269823   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.269828   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.269833   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.269846   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.270022   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
	I0223 22:21:58.270353   80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
	I0223 22:21:58.270367   80620 pod_ready.go:81] duration metric: took 399.384993ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.270378   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.467272   80620 request.go:622] Waited for 196.812846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:21:58.467358   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:21:58.467365   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.467376   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.467390   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.470141   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.470169   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.470179   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.470188   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.470195   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.470204   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.470213   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.470221   80620 round_trippers.go:580]     Audit-Id: e5044b8f-aa40-4729-93fe-c25c71ca551c
	I0223 22:21:58.470349   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"742","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5136 chars]
	I0223 22:21:58.667199   80620 request.go:622] Waited for 196.342723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.667264   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.667275   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.667288   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.667318   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.669825   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.669849   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.669860   80620 round_trippers.go:580]     Audit-Id: 8c1fc862-a3d1-4b08-b8c2-f41fa6fd3cd6
	I0223 22:21:58.669869   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.669877   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.669885   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.669899   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.669910   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.670129   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:58.670496   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:58.670517   80620 pod_ready.go:81] duration metric: took 400.130245ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:58.670528   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:58.670539   80620 pod_ready.go:38] duration metric: took 1.778499138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:21:58.670563   80620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 22:21:58.684600   80620 command_runner.go:130] > -16
	I0223 22:21:58.684633   80620 ops.go:34] apiserver oom_adj: -16
	I0223 22:21:58.684642   80620 kubeadm.go:637] restartCluster took 21.880365731s
	I0223 22:21:58.684651   80620 kubeadm.go:403] StartCluster complete in 21.912911073s
	I0223 22:21:58.684672   80620 settings.go:142] acquiring lock: {Name:mk906211444ec0c60982da29f94c92fb57d72ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:58.684774   80620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:58.685563   80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:58.685892   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 22:21:58.686005   80620 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0223 22:21:58.686136   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:58.686171   80620 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:58.687964   80620 out.go:177] * Enabled addons: 
	I0223 22:21:58.686508   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:21:58.689318   80620 addons.go:492] enable addons completed in 3.316295ms: enabled=[]
	I0223 22:21:58.689636   80620 round_trippers.go:463] GET https://192.168.39.240:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:21:58.689653   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.689665   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.689674   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.692405   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.692425   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.692435   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.692448   80620 round_trippers.go:580]     Audit-Id: 2916b551-1504-4ee6-8f0b-8bb9b49c72fe
	I0223 22:21:58.692457   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.692474   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.692486   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.692499   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.692512   80620 round_trippers.go:580]     Content-Length: 291
	I0223 22:21:58.692541   80620 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88095e59-4c47-4f2e-9af0-397e7cc508de","resourceVersion":"743","creationTimestamp":"2023-02-23T22:17:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:21:58.692706   80620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773885" context rescaled to 1 replicas
	I0223 22:21:58.692739   80620 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:21:58.694468   80620 out.go:177] * Verifying Kubernetes components...
	I0223 22:21:58.696081   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:21:58.815357   80620 command_runner.go:130] > apiVersion: v1
	I0223 22:21:58.815388   80620 command_runner.go:130] > data:
	I0223 22:21:58.815395   80620 command_runner.go:130] >   Corefile: |
	I0223 22:21:58.815401   80620 command_runner.go:130] >     .:53 {
	I0223 22:21:58.815406   80620 command_runner.go:130] >         log
	I0223 22:21:58.815414   80620 command_runner.go:130] >         errors
	I0223 22:21:58.815423   80620 command_runner.go:130] >         health {
	I0223 22:21:58.815430   80620 command_runner.go:130] >            lameduck 5s
	I0223 22:21:58.815435   80620 command_runner.go:130] >         }
	I0223 22:21:58.815443   80620 command_runner.go:130] >         ready
	I0223 22:21:58.815455   80620 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 22:21:58.815461   80620 command_runner.go:130] >            pods insecure
	I0223 22:21:58.815470   80620 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 22:21:58.815479   80620 command_runner.go:130] >            ttl 30
	I0223 22:21:58.815485   80620 command_runner.go:130] >         }
	I0223 22:21:58.815495   80620 command_runner.go:130] >         prometheus :9153
	I0223 22:21:58.815501   80620 command_runner.go:130] >         hosts {
	I0223 22:21:58.815510   80620 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0223 22:21:58.815517   80620 command_runner.go:130] >            fallthrough
	I0223 22:21:58.815526   80620 command_runner.go:130] >         }
	I0223 22:21:58.815537   80620 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 22:21:58.815545   80620 command_runner.go:130] >            max_concurrent 1000
	I0223 22:21:58.815553   80620 command_runner.go:130] >         }
	I0223 22:21:58.815563   80620 command_runner.go:130] >         cache 30
	I0223 22:21:58.815574   80620 command_runner.go:130] >         loop
	I0223 22:21:58.815583   80620 command_runner.go:130] >         reload
	I0223 22:21:58.815595   80620 command_runner.go:130] >         loadbalance
	I0223 22:21:58.815605   80620 command_runner.go:130] >     }
	I0223 22:21:58.815614   80620 command_runner.go:130] > kind: ConfigMap
	I0223 22:21:58.815623   80620 command_runner.go:130] > metadata:
	I0223 22:21:58.815631   80620 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:17:37Z"
	I0223 22:21:58.815641   80620 command_runner.go:130] >   name: coredns
	I0223 22:21:58.815651   80620 command_runner.go:130] >   namespace: kube-system
	I0223 22:21:58.815660   80620 command_runner.go:130] >   resourceVersion: "360"
	I0223 22:21:58.815671   80620 command_runner.go:130] >   uid: 79632023-f720-4e05-a063-411c24789887
	I0223 22:21:58.818640   80620 node_ready.go:35] waiting up to 6m0s for node "multinode-773885" to be "Ready" ...
	I0223 22:21:58.818784   80620 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0223 22:21:58.866997   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.867022   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.867036   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.867046   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.869514   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.869542   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.869553   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.869562   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.869568   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.869573   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.869579   80620 round_trippers.go:580]     Audit-Id: ef8ca951-03a3-4673-b3b0-d6e949e3aba1
	I0223 22:21:58.869586   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.869696   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:59.370801   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:59.370828   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:59.370840   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:59.370850   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:59.373237   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:59.373263   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:59.373275   80620 round_trippers.go:580]     Audit-Id: cc5c5f53-65a1-48f1-8d30-2983a96a1517
	I0223 22:21:59.373284   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:59.373292   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:59.373301   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:59.373310   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:59.373320   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:59 GMT
	I0223 22:21:59.373432   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:59.871104   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:59.871130   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:59.871142   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:59.871152   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:59.873824   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:59.873849   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:59.873860   80620 round_trippers.go:580]     Audit-Id: a0c12052-13ba-4532-b2cb-ef0712468e2c
	I0223 22:21:59.873868   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:59.873877   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:59.873890   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:59.873898   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:59.873910   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:59 GMT
	I0223 22:21:59.874344   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.371108   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:00.371138   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:00.371150   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:00.371160   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:00.373796   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:00.373818   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:00.373826   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:00.373832   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:00.373837   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:00.373843   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:00 GMT
	I0223 22:22:00.373849   80620 round_trippers.go:580]     Audit-Id: 6d76f1af-c5ab-44d4-ac95-d4a732c54af0
	I0223 22:22:00.373861   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:00.374155   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.870897   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:00.870933   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:00.870942   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:00.870951   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:00.873427   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:00.873451   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:00.873462   80620 round_trippers.go:580]     Audit-Id: 494f6db1-2d29-4a14-be25-f5115f464c6c
	I0223 22:22:00.873471   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:00.873485   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:00.873495   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:00.873504   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:00.873512   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:00 GMT
	I0223 22:22:00.873654   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.874130   80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
	I0223 22:22:01.370246   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:01.370268   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:01.370279   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:01.370286   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:01.372742   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:01.372768   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:01.372779   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:01.372787   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:01.372796   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:01.372808   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:01 GMT
	I0223 22:22:01.372816   80620 round_trippers.go:580]     Audit-Id: d657d94b-1177-4e47-9c6a-10517add9c29
	I0223 22:22:01.372827   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:01.372974   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:01.870635   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:01.870664   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:01.870672   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:01.870679   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:01.873350   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:01.873373   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:01.873386   80620 round_trippers.go:580]     Audit-Id: 3aae1eee-a094-424f-bbd3-1cc775206a05
	I0223 22:22:01.873395   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:01.873403   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:01.873410   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:01.873419   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:01.873428   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:01 GMT
	I0223 22:22:01.873701   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.370356   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:02.370378   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:02.370386   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:02.370392   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:02.373961   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:02.373983   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:02.373992   80620 round_trippers.go:580]     Audit-Id: 2d8ae255-30e7-495f-82a8-f977058510be
	I0223 22:22:02.374000   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:02.374008   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:02.374018   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:02.374028   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:02.374041   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:02 GMT
	I0223 22:22:02.374362   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.871107   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:02.871133   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:02.871148   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:02.871157   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:02.873653   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:02.873672   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:02.873680   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:02.873686   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:02.873691   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:02 GMT
	I0223 22:22:02.873697   80620 round_trippers.go:580]     Audit-Id: 88e3a2a0-3a44-456c-a122-9443f9691153
	I0223 22:22:02.873706   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:02.873715   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:02.874022   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.874437   80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
	I0223 22:22:03.370842   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:03.370869   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:03.370886   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:03.370894   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:03.372889   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:03.372909   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:03.372916   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:03.372922   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:03 GMT
	I0223 22:22:03.372928   80620 round_trippers.go:580]     Audit-Id: 553e23aa-d7b4-4f46-b968-491b3c19b7a9
	I0223 22:22:03.372934   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:03.372942   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:03.372954   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:03.373055   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:03.870742   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:03.870764   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:03.870773   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:03.870779   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:03.873449   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:03.873469   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:03.873476   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:03.873482   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:03.873487   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:03 GMT
	I0223 22:22:03.873493   80620 round_trippers.go:580]     Audit-Id: d10ccbbb-11df-43ab-9526-c648f4eb57ab
	I0223 22:22:03.873499   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:03.873504   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:03.873699   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:04.370303   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.370324   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.370332   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.370339   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.372813   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.372839   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.372851   80620 round_trippers.go:580]     Audit-Id: bdad9e22-9644-4e1c-8f6c-ae6fc5d4caf1
	I0223 22:22:04.372861   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.372870   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.372879   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.372893   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.372902   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.373649   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:04.870293   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.870319   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.870327   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.870333   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.873111   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.873137   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.873148   80620 round_trippers.go:580]     Audit-Id: 356034ea-3c99-4375-a746-070c2cc9db4c
	I0223 22:22:04.873157   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.873164   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.873172   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.873182   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.873192   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.873417   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:04.873740   80620 node_ready.go:49] node "multinode-773885" has status "Ready":"True"
	I0223 22:22:04.873759   80620 node_ready.go:38] duration metric: took 6.055088164s waiting for node "multinode-773885" to be "Ready" ...
	I0223 22:22:04.873768   80620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:22:04.873821   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:04.873828   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.873836   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.873842   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.877171   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:04.877190   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.877199   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.877209   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.877217   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.877225   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.877234   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.877242   80620 round_trippers.go:580]     Audit-Id: ea2e3ce7-5ec8-4de8-affe-00217b9f0f75
	I0223 22:22:04.878185   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"788"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83657 chars]
	I0223 22:22:04.880661   80620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:04.880721   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:04.880729   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.880736   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.880743   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.882620   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:04.882637   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.882643   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.882649   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.882654   80620 round_trippers.go:580]     Audit-Id: b8c34b52-e089-4d20-abac-792cd26a154e
	I0223 22:22:04.882660   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.882665   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.882671   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.882780   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:04.883130   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.883141   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.883148   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.883154   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.885545   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.885559   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.885566   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.885571   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.885577   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.885582   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.885590   80620 round_trippers.go:580]     Audit-Id: a935859f-b8a0-4ddc-8ffe-b88f374b4617
	I0223 22:22:04.885597   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.885668   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:05.386735   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:05.386762   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.386775   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.386785   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.389024   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:05.389044   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.389055   80620 round_trippers.go:580]     Audit-Id: 5162732a-6a2d-4976-bd1a-d7a30dbd6874
	I0223 22:22:05.389063   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.389070   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.389082   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.389095   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.389103   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.389223   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:05.389693   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:05.389706   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.389713   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.389722   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.391445   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:05.391462   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.391469   80620 round_trippers.go:580]     Audit-Id: 152ffe10-665f-45a2-8a81-8746544ba57e
	I0223 22:22:05.391475   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.391482   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.391491   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.391501   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.391511   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.391627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:05.886225   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:05.886248   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.886257   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.886264   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.888353   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:05.888389   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.888399   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.888408   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.888417   80620 round_trippers.go:580]     Audit-Id: cc5f0143-2508-446f-907a-56ab533f7430
	I0223 22:22:05.888426   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.888438   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.888446   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.889024   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:05.889458   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:05.889469   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.889476   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.889484   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.891242   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:05.891257   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.891263   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.891269   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.891275   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.891283   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.891293   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.891319   80620 round_trippers.go:580]     Audit-Id: ee3b00fc-914b-4eba-8a45-e4597d8f6d25
	I0223 22:22:05.891627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.386281   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:06.386303   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.386311   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.386326   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.388974   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:06.388992   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.388999   80620 round_trippers.go:580]     Audit-Id: 220c9abc-71ea-4bf1-984a-8b6e023377f1
	I0223 22:22:06.389014   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.389026   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.389038   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.389046   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.389052   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.389842   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:06.390308   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:06.390321   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.390328   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.390337   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.391935   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:06.391953   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.391962   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.391970   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.391980   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.391989   80620 round_trippers.go:580]     Audit-Id: 7685b789-c707-4d17-88af-7145585bce78
	I0223 22:22:06.391998   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.392010   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.392362   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.886127   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:06.886150   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.886159   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.886165   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.889975   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:06.890001   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.890013   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.890023   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.890035   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.890048   80620 round_trippers.go:580]     Audit-Id: 87848966-24d5-45b3-a7aa-56f65410f508
	I0223 22:22:06.890057   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.890070   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.890267   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:06.890721   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:06.890734   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.890741   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.890747   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.895655   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:06.895674   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.895684   80620 round_trippers.go:580]     Audit-Id: f054bb7d-1199-4b8d-b3f0-4c0274f1d63d
	I0223 22:22:06.895693   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.895702   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.895713   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.895724   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.895736   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.896139   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.896420   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
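The ~500ms cadence visible in the timestamps above is minikube's readiness wait: it re-fetches the coredns pod (and its node) until the pod's Ready condition turns True. A minimal client-go sketch of that polling pattern, illustrative only: waitPodReady is a hypothetical name, not minikube's actual pod_ready.go helper.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the API server (as the logged GETs do, roughly every
// 500ms) until the pod's Ready condition is True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Hypothetical standalone usage against the cluster from this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-787d4945fb-ktr7h", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}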
	I0223 22:22:07.386841   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:07.386862   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.386871   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.386878   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.389998   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:07.390025   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.390036   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.390046   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.390054   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.390062   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.390070   80620 round_trippers.go:580]     Audit-Id: d6b7ea92-112f-499d-a61b-86d8245e8558
	I0223 22:22:07.390078   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.390244   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:07.390679   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:07.390690   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.390698   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.390704   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.392927   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:07.392948   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.392958   80620 round_trippers.go:580]     Audit-Id: e7498617-1172-42fd-b07a-d2d628e52a21
	I0223 22:22:07.392969   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.392988   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.393002   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.393011   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.393022   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.393607   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:07.886231   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:07.886254   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.886277   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.886284   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.889328   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:07.889351   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.889359   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.889366   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.889371   80620 round_trippers.go:580]     Audit-Id: 996a8d26-ab61-4eb1-a206-c0fb32514e06
	I0223 22:22:07.889377   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.889382   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.889388   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.889970   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:07.890413   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:07.890425   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.890432   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.890439   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.897920   80620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0223 22:22:07.897934   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.897941   80620 round_trippers.go:580]     Audit-Id: 4221b7db-ff10-4443-aed5-78c6f7b9296c
	I0223 22:22:07.897947   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.897953   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.897958   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.897966   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.897972   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.898379   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:08.386191   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:08.386213   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.386224   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.386234   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.388618   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.388637   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.388644   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.388652   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.388660   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.388668   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.388689   80620 round_trippers.go:580]     Audit-Id: 9fd3f354-aaea-4470-b0a9-a62bb9cf4b81
	I0223 22:22:08.388695   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.389016   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:08.389462   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:08.389474   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.389484   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.389493   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.391347   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:08.391366   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.391376   80620 round_trippers.go:580]     Audit-Id: d2b922bc-cc07-4d6a-a919-5b81247f7675
	I0223 22:22:08.391385   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.391396   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.391405   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.391414   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.391419   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.391692   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:08.886358   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:08.886387   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.886397   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.886403   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.889174   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.889200   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.889209   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.889215   80620 round_trippers.go:580]     Audit-Id: 7d35bf13-e46b-4b70-b379-eef2287d1352
	I0223 22:22:08.889220   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.889226   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.889231   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.889236   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.889437   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:08.889910   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:08.889923   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.889931   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.889937   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.892893   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.892908   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.892914   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.892919   80620 round_trippers.go:580]     Audit-Id: c156c99d-e130-4f55-b4e3-14616a7ba70f
	I0223 22:22:08.892927   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.892936   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.892945   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.892956   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.893597   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:09.386240   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:09.386263   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.386272   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.386278   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.388959   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.388983   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.388991   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.388997   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.389002   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.389007   80620 round_trippers.go:580]     Audit-Id: b1b9610c-e081-4bbb-837e-8be581f68475
	I0223 22:22:09.389013   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.389018   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.389296   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:09.389849   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:09.389877   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.389888   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.389895   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.391871   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:09.391888   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.391895   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.391900   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.391906   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.391911   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.391916   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.391930   80620 round_trippers.go:580]     Audit-Id: 002294de-1a26-4570-886e-0a7800195800
	I0223 22:22:09.392074   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:09.392445   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
	I0223 22:22:09.886775   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:09.886796   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.886805   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.886812   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.889680   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.889703   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.889710   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.889716   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.889722   80620 round_trippers.go:580]     Audit-Id: 3a94f330-f28f-46c4-a648-51998b06aed1
	I0223 22:22:09.889730   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.889740   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.889749   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.889960   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:09.890412   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:09.890426   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.890433   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.890439   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.893112   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.893124   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.893131   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.893136   80620 round_trippers.go:580]     Audit-Id: f1b19073-36ac-4a4c-b6c5-aa4b69ec1776
	I0223 22:22:09.893141   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.893148   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.893156   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.893165   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.893436   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:10.386076   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:10.386100   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.386109   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.386115   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.388462   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:10.388484   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.388491   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.388497   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.388502   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.388508   80620 round_trippers.go:580]     Audit-Id: b0c0f970-513c-4958-8f0f-9012dbfa36d5
	I0223 22:22:10.388513   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.388518   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.388755   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:10.389295   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:10.389312   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.389323   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.389333   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.391529   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:10.391550   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.391560   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.391568   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.391574   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.391582   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.391587   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.391593   80620 round_trippers.go:580]     Audit-Id: 10261026-5803-485c-834a-bf21f0cb79e3
	I0223 22:22:10.391676   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:10.886276   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:10.886298   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.886310   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.886319   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.890190   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:10.890215   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.890222   80620 round_trippers.go:580]     Audit-Id: b6386ff9-de93-4709-b3ef-d903d0d5a9cc
	I0223 22:22:10.890228   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.890234   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.890239   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.890245   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.890251   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.890402   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0223 22:22:10.890869   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:10.890883   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.890893   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.890902   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.895016   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:10.895035   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.895046   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.895055   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.895064   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.895073   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.895080   80620 round_trippers.go:580]     Audit-Id: 2e664d84-586c-4ab6-94bc-ba77835a654d
	I0223 22:22:10.895085   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.895436   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.386154   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:11.386182   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.386193   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.386202   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.388774   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.388795   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.388805   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.388814   80620 round_trippers.go:580]     Audit-Id: 0b53d934-8f77-4a2f-bbe6-92be4d3d5c17
	I0223 22:22:11.388822   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.388831   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.388848   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.388858   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.389048   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0223 22:22:11.389509   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.389522   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.389532   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.389541   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.391436   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:11.391458   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.391475   80620 round_trippers.go:580]     Audit-Id: f0d5469c-1828-43e0-99ac-880d59c5ca18
	I0223 22:22:11.391486   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.391496   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.391502   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.391508   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.391514   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.392144   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.392489   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
	I0223 22:22:11.886705   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:11.886728   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.886740   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.886747   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.897949   80620 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0223 22:22:11.897972   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.897979   80620 round_trippers.go:580]     Audit-Id: ee3fad82-cb14-466d-be80-d787cdfe18c6
	I0223 22:22:11.897988   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.897996   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.898005   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.898014   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.898023   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.898203   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0223 22:22:11.898695   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.898709   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.898716   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.898722   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.901522   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.901537   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.901546   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.901555   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.901565   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.901574   80620 round_trippers.go:580]     Audit-Id: 67ab3f98-4824-4d37-9baa-d6fde6241cd3
	I0223 22:22:11.901583   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.901592   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.901884   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.902261   80620 pod_ready.go:92] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.902281   80620 pod_ready.go:81] duration metric: took 7.021599209s waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
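	[editor's note] The pod_ready.go loop above polls a pod until its Ready condition reports "True", with a 6m0s budget per pod. A minimal sketch of the same check with client-go; the kubeconfig location, namespace, and pod name are taken from the log and used here purely for illustration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the Pod's Ready condition is True,
// mirroring the `has status "Ready":"True"` checks in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a reachable kubeconfig at the default ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same 6m0s per-pod budget the log reports.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-787d4945fb-ktr7h", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```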
	I0223 22:22:11.902292   80620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.902345   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
	I0223 22:22:11.902362   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.902374   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.902387   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.905539   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:11.905555   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.905564   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.905573   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.905584   80620 round_trippers.go:580]     Audit-Id: b11ef536-b4c5-482e-aa7c-76d59636d5d2
	I0223 22:22:11.905592   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.905600   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.905608   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.906366   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"802","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6065 chars]
	I0223 22:22:11.906856   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.906876   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.906892   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.906903   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.908814   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:11.908827   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.908833   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.908838   80620 round_trippers.go:580]     Audit-Id: afa24933-99a3-4732-ab8c-89f796285545
	I0223 22:22:11.908844   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.908849   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.908860   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.908868   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.909140   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.909495   80620 pod_ready.go:92] pod "etcd-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.909509   80620 pod_ready.go:81] duration metric: took 7.209083ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.909528   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.909582   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
	I0223 22:22:11.909592   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.909603   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.909616   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.911700   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.911720   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.911729   80620 round_trippers.go:580]     Audit-Id: 779ea438-bd06-40b6-ba45-805cc766e96d
	I0223 22:22:11.911737   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.911745   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.911754   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.911762   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.911772   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.911987   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"793","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7599 chars]
	I0223 22:22:11.912445   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.912459   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.912475   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.912485   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.914590   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.914610   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.914619   80620 round_trippers.go:580]     Audit-Id: 05b9d526-86d7-43a1-a29b-8b19eb1394d1
	I0223 22:22:11.914628   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.914637   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.914659   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.914670   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.914685   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.914841   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.915184   80620 pod_ready.go:92] pod "kube-apiserver-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.915198   80620 pod_ready.go:81] duration metric: took 5.656927ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.915207   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.915261   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
	I0223 22:22:11.915271   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.915282   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.915294   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.917370   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.917390   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.917400   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.917407   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.917416   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.917424   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.917434   80620 round_trippers.go:580]     Audit-Id: 1c6ec0cd-a712-46c0-9127-fc5aaaf54dca
	I0223 22:22:11.917444   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.917666   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"825","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7162 chars]
	I0223 22:22:11.918056   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.918067   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.918078   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.918090   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.920329   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.920349   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.920359   80620 round_trippers.go:580]     Audit-Id: 4abce7c0-9628-4d94-8005-2a2dfc23a6e7
	I0223 22:22:11.920367   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.920377   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.920386   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.920394   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.920410   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.921292   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.921655   80620 pod_ready.go:92] pod "kube-controller-manager-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.921672   80620 pod_ready.go:81] duration metric: took 6.456858ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.921682   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.921744   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:22:11.921759   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.921770   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.921788   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.923979   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.923999   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.924008   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.924016   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.924024   80620 round_trippers.go:580]     Audit-Id: 0efbb785-cf58-48c7-81ba-79e7df1fffe6
	I0223 22:22:11.924037   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.924045   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.924054   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.924324   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0223 22:22:11.924642   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:22:11.924651   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.924659   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.924668   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.927145   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.927164   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.927174   80620 round_trippers.go:580]     Audit-Id: d525fadc-555c-4d29-8ba1-8f98e144287a
	I0223 22:22:11.927190   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.927201   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.927209   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.927221   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.927230   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.927662   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0223 22:22:11.927907   80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.927917   80620 pod_ready.go:81] duration metric: took 6.229355ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.927924   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.087372   80620 request.go:622] Waited for 159.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:22:12.087472   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:22:12.087484   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.087494   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.087506   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.090953   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.090975   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.090982   80620 round_trippers.go:580]     Audit-Id: d476c971-82f9-4e13-bf24-ac1d0a7e0132
	I0223 22:22:12.090988   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.091000   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.091015   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.091023   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.091034   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.091257   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"751","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I0223 22:22:12.287106   80620 request.go:622] Waited for 195.345935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:12.287171   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:12.287176   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.287184   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.287190   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.290450   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.290482   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.290493   80620 round_trippers.go:580]     Audit-Id: 293be0f3-4481-47c8-8397-f5bcd5d19b91
	I0223 22:22:12.290503   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.290511   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.290527   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.290541   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.290550   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.290685   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:12.290991   80620 pod_ready.go:92] pod "kube-proxy-mdjks" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:12.291002   80620 pod_ready.go:81] duration metric: took 363.073923ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
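	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, not from the server-side APF whose flow-schema UIDs appear in the response headers: client-go defaults to QPS=5 and Burst=10, so this burst of GETs gets spaced out locally. A sketch of relaxing that limiter on a rest.Config; the values 50/100 are illustrative, not what minikube configures:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; requests beyond the budget are
	// delayed client-side, which is what request.go:622 logs above.
	cfg.QPS = 50   // illustrative value
	cfg.Burst = 100 // illustrative value
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```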
	I0223 22:22:12.291011   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.487380   80620 request.go:622] Waited for 196.297867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:22:12.487451   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:22:12.487455   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.487463   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.487470   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.490351   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:12.490369   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.490376   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.490382   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.490390   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.490396   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.490402   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.490408   80620 round_trippers.go:580]     Audit-Id: 3101849d-f3a0-4ede-99b6-2a380cea5ba6
	I0223 22:22:12.490636   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0223 22:22:12.687374   80620 request.go:622] Waited for 196.32053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:22:12.687452   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:22:12.687458   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.687466   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.687472   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.690923   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.690945   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.690952   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.690958   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.690963   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.690969   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.690975   80620 round_trippers.go:580]     Audit-Id: f8604e33-edeb-42ae-8e19-5e27a6bd8d7d
	I0223 22:22:12.690980   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.693472   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
	I0223 22:22:12.693842   80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:12.693857   80620 pod_ready.go:81] duration metric: took 402.838971ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.693868   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.886856   80620 request.go:622] Waited for 192.90851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:22:12.886917   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:22:12.886932   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.886943   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.886952   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.893080   80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 22:22:12.893102   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.893109   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.893115   80620 round_trippers.go:580]     Audit-Id: 854e2fd9-4c25-4b2f-bc59-61d21fabfb74
	I0223 22:22:12.893120   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.893125   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.893131   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.893136   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.893332   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"786","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4892 chars]
	I0223 22:22:13.087065   80620 request.go:622] Waited for 193.332526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:13.087127   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:13.087133   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.087143   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.087153   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.091144   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:13.091162   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.091169   80620 round_trippers.go:580]     Audit-Id: bf568af1-d7fc-4da0-9559-42a27fc0cef3
	I0223 22:22:13.091175   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.091181   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.091186   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.091198   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.091210   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.091630   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:13.091948   80620 pod_ready.go:92] pod "kube-scheduler-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:13.091980   80620 pod_ready.go:81] duration metric: took 398.085634ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:13.091998   80620 pod_ready.go:38] duration metric: took 8.218220101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:22:13.092020   80620 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:22:13.092066   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:22:13.104775   80620 command_runner.go:130] > 1675
	I0223 22:22:13.104818   80620 api_server.go:71] duration metric: took 14.412044719s to wait for apiserver process to appear ...
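	[editor's note] Before probing health endpoints, api_server.go confirms a kube-apiserver process exists by running pgrep through minikube's ssh_runner inside the VM. The same check run locally looks like the sketch below; it assumes passwordless sudo (drop "sudo" to match only processes owned by the current user):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -x exact match, -n newest matching process, -f match the full
	// command line; pgrep exits non-zero when nothing matches.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // e.g. 1675 in the log
}
```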
	I0223 22:22:13.104835   80620 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:22:13.104847   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:22:13.110111   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
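	[editor's note] The healthz probe is a plain HTTPS GET that expects a 200 with body "ok" (the default RBAC role system:public-info-viewer allows unauthenticated access to /healthz and /version). A minimal sketch; disabling certificate verification here is an assumption for brevity, whereas a real client would trust the profile's CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: skip cert verification for brevity only.
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://192.168.39.240:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```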
	I0223 22:22:13.110176   80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0223 22:22:13.110187   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.110206   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.110217   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.110872   80620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0223 22:22:13.110888   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.110895   80620 round_trippers.go:580]     Audit-Id: 4f7ff6ce-bed0-47c2-918d-6dd15db9ce31
	I0223 22:22:13.110901   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.110906   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.110911   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.110918   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.110923   80620 round_trippers.go:580]     Content-Length: 263
	I0223 22:22:13.110930   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.110950   80620 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:22:13.111007   80620 api_server.go:140] control plane version: v1.26.1
	I0223 22:22:13.111018   80620 api_server.go:130] duration metric: took 6.177354ms to wait for apiserver health ...
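	[editor's note] The /version body decodes into a small struct, and the gitVersion field is what the "control plane version: v1.26.1" line reports. A sketch of that decode, using the exact body returned above trimmed to the fields of interest:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed copy of the /version response body logged above.
	raw := []byte(`{"major":"1","minor":"26","gitVersion":"v1.26.1"}`)
	var v struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.26.1
}
```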
	I0223 22:22:13.111024   80620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:22:13.287730   80620 request.go:622] Waited for 176.607463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.287780   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.287784   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.287794   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.287804   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.292061   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:13.292080   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.292087   80620 round_trippers.go:580]     Audit-Id: 8f903081-07eb-4386-b54e-2c988265836f
	I0223 22:22:13.292096   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.292104   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.292110   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.292116   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.292121   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.294183   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
	I0223 22:22:13.296686   80620 system_pods.go:59] 12 kube-system pods found
	I0223 22:22:13.296706   80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:22:13.296711   80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
	I0223 22:22:13.296715   80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:22:13.296719   80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:22:13.296723   80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:22:13.296727   80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:22:13.296731   80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
	I0223 22:22:13.296737   80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:22:13.296741   80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:22:13.296745   80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:22:13.296750   80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
	I0223 22:22:13.296754   80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:22:13.296759   80620 system_pods.go:74] duration metric: took 185.729884ms to wait for pod list to return data ...
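	[editor's note] The system_pods.go check is a single List over kube-system followed by a per-pod status walk, which produces the "12 kube-system pods found" summary above. An equivalent sketch with client-go:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Same name/UID/phase triple the log prints per pod.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
```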
	I0223 22:22:13.296768   80620 default_sa.go:34] waiting for default service account to be created ...
	I0223 22:22:13.487059   80620 request.go:622] Waited for 190.213748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:22:13.487142   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:22:13.487151   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.487163   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.487179   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.490660   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:13.490686   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.490698   80620 round_trippers.go:580]     Content-Length: 261
	I0223 22:22:13.490707   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.490715   80620 round_trippers.go:580]     Audit-Id: b33f914f-7659-4fc8-8f76-26f7e677ba77
	I0223 22:22:13.490724   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.490733   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.490746   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.490755   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.490784   80620 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"860"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"62ac0740-2090-4217-a812-0d7ea88a967e","resourceVersion":"301","creationTimestamp":"2023-02-23T22:17:49Z"}}]}
	I0223 22:22:13.491028   80620 default_sa.go:45] found service account: "default"
	I0223 22:22:13.491048   80620 default_sa.go:55] duration metric: took 194.273065ms for default service account to be created ...
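	[editor's note] default_sa.go polls until the "default" ServiceAccount exists, since pods cannot be admitted in a namespace before its default account is created. A compact sketch of that wait; the 2-minute timeout is an illustrative choice, not minikube's:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		sas, err := client.CoreV1().ServiceAccounts("default").List(context.Background(), metav1.ListOptions{})
		if err == nil {
			for _, sa := range sas.Items {
				if sa.Name == "default" {
					fmt.Println(`found service account: "default"`)
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("default service account never appeared")
}
```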
	I0223 22:22:13.491059   80620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 22:22:13.687553   80620 request.go:622] Waited for 196.395892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.687624   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.687630   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.687642   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.687659   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.691923   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:13.691949   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.691960   80620 round_trippers.go:580]     Audit-Id: b99f1d26-3de6-4548-9948-e1ef63d9e02a
	I0223 22:22:13.691969   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.691980   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.691988   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.691997   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.692005   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.693522   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
	I0223 22:22:13.695955   80620 system_pods.go:86] 12 kube-system pods found
	I0223 22:22:13.695978   80620 system_pods.go:89] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:22:13.695985   80620 system_pods.go:89] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
	I0223 22:22:13.695993   80620 system_pods.go:89] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:22:13.695999   80620 system_pods.go:89] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:22:13.696005   80620 system_pods.go:89] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:22:13.696012   80620 system_pods.go:89] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:22:13.696020   80620 system_pods.go:89] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
	I0223 22:22:13.696028   80620 system_pods.go:89] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:22:13.696040   80620 system_pods.go:89] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:22:13.696048   80620 system_pods.go:89] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:22:13.696055   80620 system_pods.go:89] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
	I0223 22:22:13.696061   80620 system_pods.go:89] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:22:13.696071   80620 system_pods.go:126] duration metric: took 205.005964ms to wait for k8s-apps to be running ...
	I0223 22:22:13.696085   80620 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 22:22:13.696135   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:22:13.709623   80620 system_svc.go:56] duration metric: took 13.531533ms WaitForService to wait for kubelet.
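	[editor's note] The kubelet liveness test is just the exit status of systemctl, which minikube runs over SSH inside the VM. A simplified local equivalent (single unit name rather than the exact argument list in the log; with --quiet the answer is carried entirely by the exit code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; --quiet suppresses output.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```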
	I0223 22:22:13.709679   80620 kubeadm.go:578] duration metric: took 15.016875282s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 22:22:13.709713   80620 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:22:13.887138   80620 request.go:622] Waited for 177.351024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0223 22:22:13.887250   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0223 22:22:13.887261   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.887269   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.887276   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.889579   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:13.889601   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.889608   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.889614   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.889620   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.889625   80620 round_trippers.go:580]     Audit-Id: 4402b5a7-68c0-489c-bf87-bedbd28a14fe
	I0223 22:22:13.889631   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.889636   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.889855   80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16192 chars]
	I0223 22:22:13.890436   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890455   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890468   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890474   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890481   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890489   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890496   80620 node_conditions.go:105] duration metric: took 180.777399ms to run NodePressure ...
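	[editor's note] The NodePressure pass lists all three nodes and reads their capacity, producing the repeated "storage ephemeral capacity is 17784752Ki" / "cpu capacity is 2" pairs above; a full check would also confirm MemoryPressure and DiskPressure are False. A sketch that prints the same figures:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		for _, c := range n.Status.Conditions {
			// A node under pressure would fail the check.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("node %s reports %s\n", n.Name, c.Type)
			}
		}
	}
}
```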
	I0223 22:22:13.890512   80620 start.go:228] waiting for startup goroutines ...
	I0223 22:22:13.890522   80620 start.go:233] waiting for cluster config update ...
	I0223 22:22:13.890533   80620 start.go:242] writing updated cluster config ...
	I0223 22:22:13.890966   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:13.891077   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:13.893728   80620 out.go:177] * Starting worker node multinode-773885-m02 in cluster multinode-773885
	I0223 22:22:13.895212   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:22:13.895236   80620 cache.go:57] Caching tarball of preloaded images
	I0223 22:22:13.895333   80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:22:13.895345   80620 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:22:13.895468   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:13.895625   80620 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:22:13.895655   80620 start.go:364] acquiring machines lock for multinode-773885-m02: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 22:22:13.895705   80620 start.go:368] acquired machines lock for "multinode-773885-m02" in 30.081µs
	I0223 22:22:13.895724   80620 start.go:96] Skipping create...Using existing machine configuration
	I0223 22:22:13.895732   80620 fix.go:55] fixHost starting: m02
	I0223 22:22:13.896010   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:13.896038   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:13.910341   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0223 22:22:13.910796   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:13.911318   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:22:13.911343   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:13.911672   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:13.911860   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:13.911979   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
	I0223 22:22:13.913566   80620 fix.go:103] recreateIfNeeded on multinode-773885-m02: state=Stopped err=<nil>
	I0223 22:22:13.913585   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	W0223 22:22:13.913746   80620 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 22:22:13.915708   80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885-m02" ...
	I0223 22:22:13.917009   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .Start
	I0223 22:22:13.917151   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring networks are active...
	I0223 22:22:13.917783   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network default is active
	I0223 22:22:13.918134   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network mk-multinode-773885 is active
	I0223 22:22:13.918457   80620 main.go:141] libmachine: (multinode-773885-m02) Getting domain xml...
	I0223 22:22:13.919047   80620 main.go:141] libmachine: (multinode-773885-m02) Creating domain...
	I0223 22:22:15.148655   80620 main.go:141] libmachine: (multinode-773885-m02) Waiting to get IP...
	I0223 22:22:15.149521   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.149889   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.149974   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.149904   80738 retry.go:31] will retry after 193.258579ms: waiting for machine to come up
	I0223 22:22:15.344335   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.344701   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.344731   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.344650   80738 retry.go:31] will retry after 325.897575ms: waiting for machine to come up
	I0223 22:22:15.672194   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.672594   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.672628   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.672550   80738 retry.go:31] will retry after 464.389068ms: waiting for machine to come up
	I0223 22:22:16.138184   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:16.138690   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:16.138753   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.138682   80738 retry.go:31] will retry after 418.748231ms: waiting for machine to come up
	I0223 22:22:16.559096   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:16.559605   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:16.559635   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.559550   80738 retry.go:31] will retry after 471.42311ms: waiting for machine to come up
	I0223 22:22:17.033003   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:17.033388   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:17.033425   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.033349   80738 retry.go:31] will retry after 716.223287ms: waiting for machine to come up
	I0223 22:22:17.751192   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:17.751627   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:17.751662   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.751564   80738 retry.go:31] will retry after 829.526019ms: waiting for machine to come up
	I0223 22:22:18.582469   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:18.582861   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:18.582893   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:18.582810   80738 retry.go:31] will retry after 1.314736274s: waiting for machine to come up
	I0223 22:22:19.898527   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:19.898968   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:19.898996   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:19.898923   80738 retry.go:31] will retry after 1.848898641s: waiting for machine to come up
	I0223 22:22:21.749410   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:21.749799   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:21.749831   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:21.749746   80738 retry.go:31] will retry after 1.422968619s: waiting for machine to come up
	I0223 22:22:23.174280   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:23.174762   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:23.174796   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:23.174689   80738 retry.go:31] will retry after 2.26457317s: waiting for machine to come up
	I0223 22:22:25.440649   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:25.441040   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:25.441077   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:25.441025   80738 retry.go:31] will retry after 2.412299301s: waiting for machine to come up
	I0223 22:22:27.856562   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:27.857000   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:27.857029   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:27.856943   80738 retry.go:31] will retry after 3.510265055s: waiting for machine to come up
	I0223 22:22:31.369182   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.369590   80620 main.go:141] libmachine: (multinode-773885-m02) Found IP for machine: 192.168.39.102
	I0223 22:22:31.369622   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
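
The "will retry after ..." lines above are a poll with growing, jittered delays until the VM's DHCP lease appears. A standalone sketch of that pattern, assuming a lookupIP stub and illustrative delays (this is not minikube's retry implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases; hypothetical here.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Sleep with some jitter, then grow the delay, as in the retry log above.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
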
	I0223 22:22:31.369632   80620 main.go:141] libmachine: (multinode-773885-m02) Reserving static IP address...
	I0223 22:22:31.370012   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.370035   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"}
	I0223 22:22:31.370045   80620 main.go:141] libmachine: (multinode-773885-m02) Reserved static IP address: 192.168.39.102
	I0223 22:22:31.370056   80620 main.go:141] libmachine: (multinode-773885-m02) Waiting for SSH to be available...
	I0223 22:22:31.370068   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Getting to WaitForSSH function...
	I0223 22:22:31.372076   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.372417   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.372440   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.372551   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH client type: external
	I0223 22:22:31.372572   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa (-rw-------)
	I0223 22:22:31.372608   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 22:22:31.372622   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | About to run SSH command:
	I0223 22:22:31.372638   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | exit 0
	I0223 22:22:31.506747   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | SSH cmd err, output: <nil>: 
	I0223 22:22:31.507041   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetConfigRaw
	I0223 22:22:31.507719   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:31.510014   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.510356   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.510390   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.510652   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:31.510883   80620 machine.go:88] provisioning docker machine ...
	I0223 22:22:31.510909   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:31.511142   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.511321   80620 buildroot.go:166] provisioning hostname "multinode-773885-m02"
	I0223 22:22:31.511339   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.511489   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.513584   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.513939   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.513969   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.514122   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:31.514268   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.514404   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.514532   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:31.514655   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:31.515234   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:31.515255   80620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773885-m02 && echo "multinode-773885-m02" | sudo tee /etc/hostname
	I0223 22:22:31.655693   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885-m02
	
	I0223 22:22:31.655725   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.658407   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.658788   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.658815   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.658999   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:31.659184   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.659347   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.659464   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:31.659613   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:31.660176   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:31.660212   80620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773885-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773885-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:22:31.799792   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
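
Provisioning here drives the guest over SSH as user docker with the profile's id_rsa key, per the external-ssh flags logged above. A minimal equivalent using golang.org/x/crypto/ssh; this is not the libmachine code path, and InsecureIgnoreHostKey simply mirrors the StrictHostKeyChecking=no flag from the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above.
	keyPath := "/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.102:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// The same hostname command the provisioner runs above.
	out, err := session.CombinedOutput(`sudo hostname multinode-773885-m02 && echo "multinode-773885-m02" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
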
	I0223 22:22:31.799859   80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
	I0223 22:22:31.799879   80620 buildroot.go:174] setting up certificates
	I0223 22:22:31.799889   80620 provision.go:83] configureAuth start
	I0223 22:22:31.799902   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.800252   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:31.803534   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.803989   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.804018   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.804274   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.806753   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.807088   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.807121   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.807237   80620 provision.go:138] copyHostCerts
	I0223 22:22:31.807268   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:22:31.807311   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
	I0223 22:22:31.807324   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:22:31.807414   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
	I0223 22:22:31.807572   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:22:31.807597   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
	I0223 22:22:31.807602   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:22:31.807632   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
	I0223 22:22:31.807685   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:22:31.807702   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
	I0223 22:22:31.807707   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:22:31.807729   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
	I0223 22:22:31.807773   80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885-m02 san=[192.168.39.102 192.168.39.102 localhost 127.0.0.1 minikube multinode-773885-m02]
	I0223 22:22:32.063720   80620 provision.go:172] copyRemoteCerts
	I0223 22:22:32.063776   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:22:32.063800   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.066310   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.066712   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.066742   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.066876   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.067090   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.067230   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.067359   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:32.161807   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:22:32.161874   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 22:22:32.184819   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:22:32.184883   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 22:22:32.206537   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:22:32.206625   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 22:22:32.228031   80620 provision.go:86] duration metric: configureAuth took 428.129514ms
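
configureAuth generates a server certificate whose SANs are the list logged at provision.go:112 above (IPs, localhost, minikube, the machine name). A condensed crypto/x509 sketch that sets the same SANs; it self-signs for brevity, whereas the real flow signs with the profile's ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-773885-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is illustrative
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the san=[...] log line above:
		IPAddresses: []net.IP{net.ParseIP("192.168.39.102"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-773885-m02"},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
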
	I0223 22:22:32.228052   80620 buildroot.go:189] setting minikube options for container-runtime
	I0223 22:22:32.228295   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:32.228322   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:32.228634   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.231144   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.231489   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.231520   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.231601   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.231819   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.231999   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.232117   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.232312   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.232708   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.232719   80620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:22:32.365102   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0223 22:22:32.365122   80620 buildroot.go:70] root file system type: tmpfs
	I0223 22:22:32.365241   80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:22:32.365265   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.367818   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.368241   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.368263   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.368492   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.368703   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.368872   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.368982   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.369180   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.369581   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.369639   80620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.240"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:22:32.513495   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.240
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:22:32.513523   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.515906   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.516266   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.516300   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.516468   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.516680   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.516873   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.517028   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.517178   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.517625   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.517648   80620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:22:33.354684   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0223 22:22:33.354711   80620 machine.go:91] provisioned docker machine in 1.843811829s
	I0223 22:22:33.354721   80620 start.go:300] post-start starting for "multinode-773885-m02" (driver="kvm2")
	I0223 22:22:33.354729   80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:22:33.354752   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.355077   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:22:33.355108   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.357808   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.358150   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.358170   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.358307   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.358509   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.358697   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.358856   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.452337   80620 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:22:33.456207   80620 command_runner.go:130] > NAME=Buildroot
	I0223 22:22:33.456227   80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0223 22:22:33.456233   80620 command_runner.go:130] > ID=buildroot
	I0223 22:22:33.456241   80620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0223 22:22:33.456248   80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0223 22:22:33.456287   80620 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 22:22:33.456303   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
	I0223 22:22:33.456371   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
	I0223 22:22:33.456462   80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
	I0223 22:22:33.456474   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
	I0223 22:22:33.456577   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:22:33.464384   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:22:33.486196   80620 start.go:303] post-start completed in 131.456152ms
	I0223 22:22:33.486221   80620 fix.go:57] fixHost completed within 19.590489491s
	I0223 22:22:33.486246   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.488925   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.489233   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.489259   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.489444   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.489642   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.489819   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.489958   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.490087   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:33.490502   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:33.490517   80620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0223 22:22:33.619595   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190953.568894594
	
	I0223 22:22:33.619615   80620 fix.go:207] guest clock: 1677190953.568894594
	I0223 22:22:33.619622   80620 fix.go:220] Guest: 2023-02-23 22:22:33.568894594 +0000 UTC Remote: 2023-02-23 22:22:33.48622588 +0000 UTC m=+80.262153220 (delta=82.668714ms)
	I0223 22:22:33.619636   80620 fix.go:191] guest clock delta is within tolerance: 82.668714ms
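
The guest-clock check above runs date +%s.%N in the VM and compares the result against the host clock. A minimal version of the parse-and-compare step; only the delta arithmetic is taken from the log, and the 2s tolerance is an assumption, not minikube's threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1677190953.568894594") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative tolerance
	fmt.Printf("guest clock delta %s (within tolerance: %v)\n", delta, delta <= tolerance)
}
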
	I0223 22:22:33.619643   80620 start.go:83] releasing machines lock for "multinode-773885-m02", held for 19.723927358s
	I0223 22:22:33.619668   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.619923   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:33.622598   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.623025   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.623058   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.625082   80620 out.go:177] * Found network options:
	I0223 22:22:33.626668   80620 out.go:177]   - NO_PROXY=192.168.39.240
	W0223 22:22:33.628011   80620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:22:33.628044   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628608   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628794   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628886   80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:22:33.628929   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	W0223 22:22:33.629039   80620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:22:33.629123   80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:22:33.629150   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.631754   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.631877   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632173   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.632199   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632233   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.632253   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632406   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.632530   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.632612   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.632687   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.632797   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.632952   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.632945   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.633068   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.747533   80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:22:33.748590   80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0223 22:22:33.748617   80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 22:22:33.748665   80620 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:22:33.752644   80620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:22:33.752772   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:22:33.762613   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:22:33.779129   80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:22:33.794495   80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0223 22:22:33.794614   80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 22:22:33.794634   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:22:33.794710   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:22:33.819645   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:22:33.819665   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:22:33.819671   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:22:33.819676   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:22:33.819680   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:22:33.819684   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:22:33.819688   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:22:33.819694   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:22:33.819697   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:22:33.819702   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:22:33.819707   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:22:33.821344   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:22:33.821366   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:22:33.821378   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:22:33.821513   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:22:33.838092   80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:22:33.838113   80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:22:33.838173   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:22:33.849104   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:22:33.860042   80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:22:33.860082   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:22:33.871017   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:22:33.881892   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:22:33.892548   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:22:33.903374   80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:22:33.914628   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:22:33.925877   80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:22:33.935581   80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:22:33.935636   80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:22:33.945618   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:34.050114   80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:22:34.068154   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:22:34.068229   80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:22:34.089986   80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0223 22:22:34.090009   80620 command_runner.go:130] > [Unit]
	I0223 22:22:34.090019   80620 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:22:34.090033   80620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:22:34.090041   80620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0223 22:22:34.090049   80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0223 22:22:34.090056   80620 command_runner.go:130] > StartLimitBurst=3
	I0223 22:22:34.090063   80620 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:22:34.090072   80620 command_runner.go:130] > [Service]
	I0223 22:22:34.090083   80620 command_runner.go:130] > Type=notify
	I0223 22:22:34.090089   80620 command_runner.go:130] > Restart=on-failure
	I0223 22:22:34.090104   80620 command_runner.go:130] > Environment=NO_PROXY=192.168.39.240
	I0223 22:22:34.090111   80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:22:34.090118   80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:22:34.090150   80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:22:34.090164   80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:22:34.090170   80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:22:34.090176   80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:22:34.090182   80620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:22:34.090190   80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:22:34.090196   80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:22:34.090200   80620 command_runner.go:130] > ExecStart=
	I0223 22:22:34.090213   80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0223 22:22:34.090219   80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:22:34.090224   80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:22:34.090233   80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:22:34.090237   80620 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:22:34.090241   80620 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:22:34.090245   80620 command_runner.go:130] > LimitCORE=infinity
	I0223 22:22:34.090251   80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:22:34.090256   80620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:22:34.090260   80620 command_runner.go:130] > TasksMax=infinity
	I0223 22:22:34.090265   80620 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:22:34.090273   80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:22:34.090279   80620 command_runner.go:130] > Delegate=yes
	I0223 22:22:34.090285   80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:22:34.090293   80620 command_runner.go:130] > KillMode=process
	I0223 22:22:34.090297   80620 command_runner.go:130] > [Install]
	I0223 22:22:34.090302   80620 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:22:34.090359   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:22:34.105030   80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0223 22:22:34.126591   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:22:34.140060   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:22:34.153929   80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 22:22:34.184699   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:22:34.197888   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:22:34.214560   80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:22:34.214588   80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:22:34.214922   80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:22:34.314415   80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:22:34.423777   80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:22:34.423812   80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:22:34.439350   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:34.539377   80620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:22:35.976151   80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.436733266s)
	I0223 22:22:35.976218   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:22:36.088366   80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:22:36.208338   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:22:36.318554   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:36.423882   80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:22:36.438700   80620 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0223 22:22:36.441277   80620 out.go:177] 
	W0223 22:22:36.442813   80620 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0223 22:22:36.442833   80620 out.go:239] * 
	W0223 22:22:36.443730   80620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 22:22:36.445382   80620 out.go:177] 
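
	Note on the failure above: the run aborts at RUNTIME_ENABLE because `sudo systemctl restart cri-docker.socket` exits with status 1 immediately after the docker and cri-docker units were unmasked and enabled. The socket's own journal is not captured in this report, so the following probes on the VM are a suggested next step (a sketch, not commands taken from this run):

	  minikube ssh -p multinode-773885
	  # inspect the failing socket unit and its paired service
	  sudo systemctl status cri-docker.socket cri-docker.service
	  sudo journalctl -xeu cri-docker.socket --no-pager
	  # confirm the unit definition systemd actually loaded
	  sudo systemctl cat cri-docker.socket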
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-02-23 22:21:24 UTC, ends at Thu 2023-02-23 22:22:37 UTC. --
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653197396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653344660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653370552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653655096Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6c05479ab6bded8fa4b510984ebdaff14f9e940ce5f996cbbfa74f89cdf0e4df pid=2349 runtime=io.containerd.runc.v2
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976478317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976529296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976538800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.977357166Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/08db8c8fe66700151ca6e921ec0c7827f3f8b9da2185e6f9b77717b3db2213a2 pid=2641 runtime=io.containerd.runc.v2
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.562985619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563244746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563254901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563554212Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/17bc89f184c67734f2c7bf76e9475c45856ec85a6cc69703a04036b48218a306 pid=2718 runtime=io.containerd.runc.v2
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277252833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277345995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277367820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277588969Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9f2502586a39c34ac304fe5d1a3c0d2111c439b907e9f9955feec5ca5504872d pid=2837 runtime=io.containerd.runc.v2
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887734997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887789077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887798415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887932649Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ec64ae912e0437233e2ff6d3d8ed0b5e64201755fd0b86f988efacd563ac301c pid=2935 runtime=io.containerd.runc.v2
	Feb 23 22:22:26 multinode-773885 dockerd[827]: time="2023-02-23T22:22:26.143265689Z" level=info msg="ignoring event" container=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144112416Z" level=info msg="shim disconnected" id=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144166893Z" level=warning msg="cleaning up after shim disconnected" id=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a namespace=moby
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144202001Z" level=info msg="cleaning up dead shim"
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.167427651Z" level=warning msg="cleanup warnings time=\"2023-02-23T22:22:26Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3166 runtime=io.containerd.runc.v2\n"
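
	The "starting signal loop" entries above (22:21:58 through 22:22:11) correspond to containers being restarted after the VM reboot and match the ATTEMPT 1 rows in the container-status table below; the 22:22:26 "shim disconnected" entries are the storage-provisioner container 27a3e00db0cef exiting, shown as Exited below. As a hedged example (not run as part of this test), a shim's container ID can be mapped back to its name and state with:

	  docker inspect --format '{{.Name}} {{.State.Status}}' 27a3e00db0cef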
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	ec64ae912e043       8c811b4aec35f                                                                                         26 seconds ago      Running             busybox                   1                   9f2502586a39c
	17bc89f184c67       5185b96f0becf                                                                                         27 seconds ago      Running             coredns                   1                   08db8c8fe6670
	6c05479ab6bde       d6e3e26021b60                                                                                         39 seconds ago      Running             kindnet-cni               1                   e749663c5c7e7
	27a3e00db0cef       6e38f40d628db                                                                                         42 seconds ago      Exited              storage-provisioner       1                   bc303f21527d1
	9454f57758e35       46a6bb3c77ce0                                                                                         42 seconds ago      Running             kube-proxy                1                   7cce6a3412d50
	1e657e364abdc       fce326961ae2d                                                                                         48 seconds ago      Running             etcd                      1                   9832634b69a74
	efd94ac044a0a       655493523f607                                                                                         48 seconds ago      Running             kube-scheduler            1                   6464d18d96882
	6c70297f99403       e9c08e11b07f6                                                                                         48 seconds ago      Running             kube-controller-manager   1                   bff62e4487a30
	1f74fa3dd2e7b       deb04688c4a35                                                                                         48 seconds ago      Running             kube-apiserver            1                   4d2cd9fe6c8db
	80d446e21be45       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   ebbb7d19d9aa3
	a31cf43457e01       5185b96f0becf                                                                                         4 minutes ago       Exited              coredns                   0                   75e472928e30d
	f6b2b873cba93       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              4 minutes ago       Exited              kindnet-cni               0                   f284ce294fa00
	6becaf5c86404       46a6bb3c77ce0                                                                                         4 minutes ago       Exited              kube-proxy                0                   a2a9a29b5a412
	8d29ee663e61d       fce326961ae2d                                                                                         5 minutes ago       Exited              etcd                      0                   3b6e6d975efae
	baad115b76c60       655493523f607                                                                                         5 minutes ago       Exited              kube-scheduler            0                   072b5f08a10f2
	53723346fe3cc       e9c08e11b07f6                                                                                         5 minutes ago       Exited              kube-controller-manager   0                   979e703c6176a
	6a41aad932999       deb04688c4a35                                                                                         5 minutes ago       Exited              kube-apiserver            0                   745d6ec7adf4b
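
	This table is CRI-level output. Given the crictl.yaml written earlier in this run (runtime-endpoint unix:///var/run/cri-dockerd.sock), an equivalent listing could presumably be produced directly on the VM; a sketch, assuming crictl is on the PATH:

	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a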
	
	* 
	* ==> coredns [17bc89f184c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:60321 - 9770 "HINFO IN 6662394053686617131.163874164669885542. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.069250639s
	
	* 
	* ==> coredns [a31cf43457e0] <==
	* [INFO] 10.244.1.2:47000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758837s
	[INFO] 10.244.1.2:44690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131616s
	[INFO] 10.244.1.2:37067 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011391s
	[INFO] 10.244.1.2:38424 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001108385s
	[INFO] 10.244.1.2:47838 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089356s
	[INFO] 10.244.1.2:41552 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106594s
	[INFO] 10.244.1.2:51630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135553s
	[INFO] 10.244.0.3:55853 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122391s
	[INFO] 10.244.0.3:35953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008752s
	[INFO] 10.244.0.3:56239 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083093s
	[INFO] 10.244.0.3:38385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083481s
	[INFO] 10.244.1.2:53920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283555s
	[INFO] 10.244.1.2:34363 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000773507s
	[INFO] 10.244.1.2:54662 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081096s
	[INFO] 10.244.1.2:48627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000266217s
	[INFO] 10.244.0.3:54203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197101s
	[INFO] 10.244.0.3:52399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162405s
	[INFO] 10.244.0.3:45614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000234431s
	[INFO] 10.244.0.3:47751 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134862s
	[INFO] 10.244.1.2:53869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201736s
	[INFO] 10.244.1.2:43680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175885s
	[INFO] 10.244.1.2:45494 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167403s
	[INFO] 10.244.1.2:52027 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017095s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
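
	The SIGTERM and 5s lameduck above are the pre-restart CoreDNS instance shutting down cleanly during `minikube stop`; the interval comes from the `health` plugin's lameduck option in the CoreDNS Corefile (kubeadm's default is `health { lameduck 5s }`). To confirm, one could read the ConfigMap; a sketch, not taken from this run:

	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'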
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-773885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=multinode-773885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T22_17_39_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773885
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:22:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:22:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    multinode-773885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1475187eff99446eb4f7e011051cc8fa
	  System UUID:                1475187e-ff99-446e-b4f7-e011051cc8fa
	  Boot ID:                    4d4d0a54-af2e-49a7-a9dd-250c866abcb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-9b7sp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-787d4945fb-ktr7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m47s
	  kube-system                 etcd-multinode-773885                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-p64zr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m47s
	  kube-system                 kube-apiserver-multinode-773885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-multinode-773885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-mdjks                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-scheduler-multinode-773885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m45s                  kube-proxy       
	  Normal  Starting                 41s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x5 over 5m12s)  kubelet          Node multinode-773885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x5 over 5m12s)  kubelet          Node multinode-773885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x5 over 5m12s)  kubelet          Node multinode-773885 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     4m59s                  kubelet          Node multinode-773885 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m59s                  kubelet          Node multinode-773885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s                  kubelet          Node multinode-773885 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m48s                  node-controller  Node multinode-773885 event: Registered Node multinode-773885 in Controller
	  Normal  NodeReady                4m36s                  kubelet          Node multinode-773885 status is now: NodeReady
	  Normal  Starting                 50s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)      kubelet          Node multinode-773885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)      kubelet          Node multinode-773885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 49s)      kubelet          Node multinode-773885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                    node-controller  Node multinode-773885 event: Registered Node multinode-773885 in Controller
	
	
	Name:               multinode-773885-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773885-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773885-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:20:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    multinode-773885-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb9064ecea5b4e79869f499ba8bce75c
	  System UUID:                fb9064ec-ea5b-4e79-869f-499ba8bce75c
	  Boot ID:                    4be4ac98-4af3-4b16-af45-9c05c30bb17d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-zscjg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kindnet-fg44s               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-proxy-5d5vn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x2 over 3m51s)  kubelet          Node multinode-773885-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x2 over 3m51s)  kubelet          Node multinode-773885-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x2 over 3m51s)  kubelet          Node multinode-773885-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                  node-controller  Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller
	  Normal  NodeReady                3m38s                  kubelet          Node multinode-773885-m02 status is now: NodeReady
	  Normal  RegisteredNode           31s                    node-controller  Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller
	
	
	Name:               multinode-773885-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773885-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:20:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773885-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:20:42 +0000   Thu, 23 Feb 2023 22:20:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:20:42 +0000   Thu, 23 Feb 2023 22:20:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:20:42 +0000   Thu, 23 Feb 2023 22:20:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:20:42 +0000   Thu, 23 Feb 2023 22:20:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-773885-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c591c169912649639566ebe598459857
	  System UUID:                c591c169-9126-4963-9566-ebe598459857
	  Boot ID:                    100c5981-611e-4766-903a-70dbe2627dfb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fbfsf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-psgdt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m51s (x2 over 2m51s)  kubelet          Node multinode-773885-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x2 over 2m51s)  kubelet          Node multinode-773885-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m51s (x2 over 2m51s)  kubelet          Node multinode-773885-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m38s                  kubelet          Node multinode-773885-m03 status is now: NodeReady
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x2 over 2m4s)    kubelet          Node multinode-773885-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x2 over 2m4s)    kubelet          Node multinode-773885-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x2 over 2m4s)    kubelet          Node multinode-773885-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                115s                   kubelet          Node multinode-773885-m03 status is now: NodeReady
	  Normal  RegisteredNode           31s                    node-controller  Node multinode-773885-m03 event: Registered Node multinode-773885-m03 in Controller
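
	One detail worth noting in the output above: multinode-773885-m03 now holds PodCIDR 10.244.3.0/24, while the pre-restart kindnet log further below shows it serving 10.244.2.0/24 until 22:20:38, so its CIDR was reassigned when the node re-registered. A quick way to list the assignments (a sketch, not part of this report):

	  kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR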
	
	* 
	* ==> dmesg <==
	* [Feb23 22:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.955731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.280486] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148289] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.553293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.692232] systemd-fstab-generator[510]: Ignoring "noauto" for root device
	[  +0.095720] systemd-fstab-generator[527]: Ignoring "noauto" for root device
	[  +1.185288] systemd-fstab-generator[758]: Ignoring "noauto" for root device
	[  +0.248453] systemd-fstab-generator[792]: Ignoring "noauto" for root device
	[  +0.102398] systemd-fstab-generator[803]: Ignoring "noauto" for root device
	[  +0.122364] systemd-fstab-generator[816]: Ignoring "noauto" for root device
	[  +1.531595] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.111043] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +0.104179] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +0.097652] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[ +11.667470] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +0.392417] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.206240] kauditd_printk_skb: 8 callbacks suppressed
	[Feb23 22:22] kauditd_printk_skb: 16 callbacks suppressed
	
	* 
	* ==> etcd [1e657e364abd] <==
	* {"level":"info","ts":"2023-02-23T22:21:50.930Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T22:21:50.930Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T22:21:50.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 switched to configuration voters=(2080375272429567737)"}
	{"level":"info","ts":"2023-02-23T22:21:50.932Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","added-peer-id":"1cdefa49b8abbef9","added-peer-peer-urls":["https://192.168.39.240:2380"]}
	{"level":"info","ts":"2023-02-23T22:21:50.933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:21:50.934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:21:50.954Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"1cdefa49b8abbef9","initial-advertise-peer-urls":["https://192.168.39.240:2380"],"listen-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.240:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:21:50.958Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgPreVoteResp from 1cdefa49b8abbef9 at term 2"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became candidate at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgVoteResp from 1cdefa49b8abbef9 at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became leader at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1cdefa49b8abbef9 elected leader 1cdefa49b8abbef9 at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.080Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"1cdefa49b8abbef9","local-member-attributes":"{Name:multinode-773885 ClientURLs:[https://192.168.39.240:2379]}","request-path":"/0/members/1cdefa49b8abbef9/attributes","cluster-id":"e0745912b0778b6e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:21:52.080Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:21:52.083Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.240:2379"}
	{"level":"info","ts":"2023-02-23T22:21:52.084Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [8d29ee663e61] <==
	* {"level":"info","ts":"2023-02-23T22:17:32.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgVoteResp from 1cdefa49b8abbef9 at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1cdefa49b8abbef9 elected leader 1cdefa49b8abbef9 at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.484Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:17:32.487Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"1cdefa49b8abbef9","local-member-attributes":"{Name:multinode-773885 ClientURLs:[https://192.168.39.240:2379]}","request-path":"/0/members/1cdefa49b8abbef9/attributes","cluster-id":"e0745912b0778b6e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:17:32.488Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:17:32.492Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.240:2379"}
	{"level":"info","ts":"2023-02-23T22:17:32.489Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:17:32.496Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:17:32.489Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:17:32.503Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:17:32.504Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:17:32.504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:17:32.507Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-02-23T22:18:39.794Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.910442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-23T22:18:39.794Z","caller":"traceutil/trace.go:171","msg":"trace[1229332276] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"155.153979ms","start":"2023-02-23T22:18:39.639Z","end":"2023-02-23T22:18:39.794Z","steps":["trace[1229332276] 'range keys from in-memory index tree'  (duration: 154.79846ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T22:19:39.387Z","caller":"traceutil/trace.go:171","msg":"trace[841849164] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"239.425375ms","start":"2023-02-23T22:19:39.147Z","end":"2023-02-23T22:19:39.387Z","steps":["trace[841849164] 'process raft request'  (duration: 239.262494ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T22:19:41.080Z","caller":"traceutil/trace.go:171","msg":"trace[146502320] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"106.873274ms","start":"2023-02-23T22:19:40.973Z","end":"2023-02-23T22:19:41.080Z","steps":["trace[146502320] 'process raft request'  (duration: 106.732936ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T22:20:45.246Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-23T22:20:45.246Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-773885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	{"level":"info","ts":"2023-02-23T22:20:45.273Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"1cdefa49b8abbef9"}
	{"level":"info","ts":"2023-02-23T22:20:45.277Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:20:45.285Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:20:45.285Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-773885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	
	* 
	* ==> kernel <==
	*  22:22:37 up 1 min,  0 users,  load average: 0.60, 0.19, 0.07
	Linux multinode-773885 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [6c05479ab6bd] <==
	* I0223 22:21:59.628331       1 main.go:227] handling current node
	I0223 22:21:59.629191       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:21:59.629202       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:21:59.629410       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.102 Flags: [] Table: 0} 
	I0223 22:21:59.629537       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:21:59.629545       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:21:59.629690       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0} 
	I0223 22:22:09.634203       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:09.634224       1 main.go:227] handling current node
	I0223 22:22:09.634233       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:09.634237       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:22:09.634329       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:22:09.634334       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:22:19.648879       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:19.649253       1 main.go:227] handling current node
	I0223 22:22:19.649329       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:19.649426       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:22:19.649553       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:22:19.649592       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:22:29.663056       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:29.663342       1 main.go:227] handling current node
	I0223 22:22:29.663589       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:29.663639       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:22:29.663927       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:22:29.663981       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
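
	On each sync interval kindnet programs one route per remote node's PodCIDR via that node's InternalIP. Based on the Adding route entries above, the VM's routing table would be expected to contain the following (reconstructed from the log rather than captured; the interface name is an assumption):

	  ip route show | grep 10.244
	  # expected, per the log above:
	  # 10.244.1.0/24 via 192.168.39.102 dev eth0
	  # 10.244.3.0/24 via 192.168.39.58 dev eth0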
	
	* 
	* ==> kindnet [f6b2b873cba9] <==
	* I0223 22:20:08.782335       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:08.782366       1 main.go:227] handling current node
	I0223 22:20:08.782378       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:08.782383       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:08.782498       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:08.782503       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24] 
	I0223 22:20:18.789034       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:18.789102       1 main.go:227] handling current node
	I0223 22:20:18.789112       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:18.789118       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:18.789480       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:18.789490       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24] 
	I0223 22:20:28.797182       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:28.797218       1 main.go:227] handling current node
	I0223 22:20:28.797230       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:28.797238       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:28.797428       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:28.797438       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24] 
	I0223 22:20:38.808257       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:38.808531       1 main.go:227] handling current node
	I0223 22:20:38.808612       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:38.808735       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:38.808954       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:38.809162       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:20:38.809406       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [1f74fa3dd2e7] <==
	* I0223 22:21:53.767701       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0223 22:21:53.767780       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0223 22:21:53.763570       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0223 22:21:53.767927       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0223 22:21:53.807375       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0223 22:21:53.807485       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0223 22:21:53.845960       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:21:53.860908       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 22:21:53.860943       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:21:53.861339       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:21:53.865182       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:21:53.875653       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:21:53.875809       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:21:53.875948       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:21:53.875961       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 22:21:53.941378       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0223 22:21:54.514978       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:21:54.778557       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:21:56.611533       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:21:56.743211       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:21:56.752344       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:21:56.816590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:21:56.823384       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 22:22:06.886425       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 22:22:06.981775       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [6a41aad93299] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:20:55.126061       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:20:55.154966       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:20:55.192941       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [53723346fe3c] <==
	* I0223 22:18:04.424086       1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0223 22:18:46.708565       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m02" does not exist
	I0223 22:18:46.720411       1 range_allocator.go:372] Set node multinode-773885-m02 PodCIDR to [10.244.1.0/24]
	I0223 22:18:46.740966       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fg44s"
	I0223 22:18:46.741018       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5d5vn"
	W0223 22:18:49.432085       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-773885-m02. Assuming now as a timestamp.
	I0223 22:18:49.432675       1 event.go:294] "Event occurred" object="multinode-773885-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller"
	W0223 22:18:59.747513       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	I0223 22:19:02.090093       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 22:19:02.101165       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-zscjg"
	I0223 22:19:02.114911       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-9b7sp"
	I0223 22:19:04.450628       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-zscjg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-zscjg"
	W0223 22:19:46.421861       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	W0223 22:19:46.423059       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m03" does not exist
	I0223 22:19:46.438555       1 range_allocator.go:372] Set node multinode-773885-m03 PodCIDR to [10.244.2.0/24]
	I0223 22:19:46.456557       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-psgdt"
	I0223 22:19:46.456590       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fbfsf"
	I0223 22:19:49.459354       1 event.go:294] "Event occurred" object="multinode-773885-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773885-m03 event: Registered Node multinode-773885-m03 in Controller"
	W0223 22:19:49.460425       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-773885-m03. Assuming now as a timestamp.
	W0223 22:19:59.274458       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	W0223 22:20:33.012085       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	W0223 22:20:34.095715       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m03" does not exist
	W0223 22:20:34.096409       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	I0223 22:20:34.104228       1 range_allocator.go:372] Set node multinode-773885-m03 PodCIDR to [10.244.3.0/24]
	W0223 22:20:42.177970       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m03 node
	
	* 
	* ==> kube-controller-manager [6c70297f9940] <==
	* I0223 22:22:06.873909       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0223 22:22:06.874261       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0223 22:22:06.874514       1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0223 22:22:06.874727       1 shared_informer.go:280] Caches are synced for persistent volume
	I0223 22:22:06.874139       1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
	I0223 22:22:06.874151       1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
	I0223 22:22:06.885778       1 shared_informer.go:280] Caches are synced for namespace
	I0223 22:22:06.887045       1 shared_informer.go:280] Caches are synced for node
	I0223 22:22:06.887199       1 range_allocator.go:167] Sending events to api server.
	I0223 22:22:06.887268       1 range_allocator.go:171] Starting range CIDR allocator
	I0223 22:22:06.887457       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0223 22:22:06.887727       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0223 22:22:06.894791       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0223 22:22:06.902215       1 shared_informer.go:280] Caches are synced for attach detach
	I0223 22:22:06.907056       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0223 22:22:06.947594       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0223 22:22:06.985123       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:06.986536       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:07.004087       1 shared_informer.go:280] Caches are synced for crt configmap
	I0223 22:22:07.022102       1 shared_informer.go:280] Caches are synced for deployment
	I0223 22:22:07.024559       1 shared_informer.go:280] Caches are synced for disruption
	I0223 22:22:07.043836       1 shared_informer.go:280] Caches are synced for bootstrap_signer
	I0223 22:22:07.418122       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:22:07.418162       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 22:22:07.423312       1 shared_informer.go:280] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [6becaf5c8640] <==
	* I0223 22:17:52.428519       1 node.go:163] Successfully retrieved node IP: 192.168.39.240
	I0223 22:17:52.428776       1 server_others.go:109] "Detected node IP" address="192.168.39.240"
	I0223 22:17:52.429048       1 server_others.go:535] "Using iptables proxy"
	I0223 22:17:52.471955       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0223 22:17:52.472202       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:17:52.472334       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:17:52.472860       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:17:52.473096       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:17:52.473898       1 config.go:317] "Starting service config controller"
	I0223 22:17:52.474393       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:17:52.474564       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:17:52.474637       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:17:52.476441       1 config.go:444] "Starting node config controller"
	I0223 22:17:52.476591       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:17:52.575596       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:17:52.575638       1 shared_informer.go:280] Caches are synced for service config
	I0223 22:17:52.577063       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [9454f57758e3] <==
	* I0223 22:21:55.723163       1 node.go:163] Successfully retrieved node IP: 192.168.39.240
	I0223 22:21:55.729131       1 server_others.go:109] "Detected node IP" address="192.168.39.240"
	I0223 22:21:55.733751       1 server_others.go:535] "Using iptables proxy"
	I0223 22:21:56.081608       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0223 22:21:56.081932       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:21:56.083401       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:21:56.084774       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:21:56.203479       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:21:56.205085       1 config.go:317] "Starting service config controller"
	I0223 22:21:56.205493       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:21:56.205674       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:21:56.205782       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:21:56.206845       1 config.go:444] "Starting node config controller"
	I0223 22:21:56.208637       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:21:56.348283       1 shared_informer.go:280] Caches are synced for node config
	I0223 22:21:56.351314       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:21:56.363180       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [baad115b76c6] <==
	* W0223 22:17:34.610009       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:34.610030       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:34.611025       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:34.611092       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:34.613999       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:17:34.614066       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 22:17:34.614149       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 22:17:34.614173       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 22:17:34.614213       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 22:17:34.614265       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 22:17:35.487184       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:17:35.487376       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:17:35.632170       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:35.632547       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:35.721529       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:35.721738       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:35.755180       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 22:17:35.755382       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 22:17:35.761259       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 22:17:35.761432       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 22:17:36.073523       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 22:17:36.074101       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 22:17:38.782901       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0223 22:20:45.176065       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0223 22:20:45.176491       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [efd94ac044a0] <==
	* I0223 22:21:51.487920       1 serving.go:348] Generated self-signed cert in-memory
	W0223 22:21:53.821119       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0223 22:21:53.821286       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 22:21:53.821327       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0223 22:21:53.821848       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0223 22:21:53.856843       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0223 22:21:53.857373       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:21:53.859249       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0223 22:21:53.859546       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0223 22:21:53.860180       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 22:21:53.859587       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 22:21:53.960971       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-02-23 22:21:24 UTC, ends at Thu 2023-02-23 22:22:38 UTC. --
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.141211    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789777    1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789834    1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789892    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:21:58.789875256 +0000 UTC m=+11.061994009 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:57 multinode-773885 kubelet[1292]: E0223 22:21:57.695471    1292 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 23 22:21:57 multinode-773885 kubelet[1292]: E0223 22:21:57.696044    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume podName:5337fe89-b5a2-4562-84e3-3a7e1f201ff5 nodeName:}" failed. No retries permitted until 2023-02-23 22:22:01.695966879 +0000 UTC m=+13.968085633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume") pod "coredns-787d4945fb-ktr7h" (UID: "5337fe89-b5a2-4562-84e3-3a7e1f201ff5") : object "kube-system"/"coredns" not registered
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.167577    1292 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: I0223 22:21:58.564631    1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e749663c5c7e738a06bd131433cc331bdfe0302f4ed8652dc72907fd84e75f7f"
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.592064    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808766    1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808798    1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808843    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:22:02.808830445 +0000 UTC m=+15.080949197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:59 multinode-773885 kubelet[1292]: E0223 22:21:59.637649    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:22:00 multinode-773885 kubelet[1292]: E0223 22:22:00.141319    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
	Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.140900    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.730126    1292 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.730215    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume podName:5337fe89-b5a2-4562-84e3-3a7e1f201ff5 nodeName:}" failed. No retries permitted until 2023-02-23 22:22:09.730200815 +0000 UTC m=+22.002319582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume") pod "coredns-787d4945fb-ktr7h" (UID: "5337fe89-b5a2-4562-84e3-3a7e1f201ff5") : object "kube-system"/"coredns" not registered
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.141217    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838248    1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838298    1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838347    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:22:10.838331472 +0000 UTC m=+23.110450224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:22:03 multinode-773885 kubelet[1292]: E0223 22:22:03.140982    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:22:26 multinode-773885 kubelet[1292]: I0223 22:22:26.975727    1292 scope.go:115] "RemoveContainer" containerID="b83daa4cdd8d8298126a07aab8f78401afc75993bca101cbb72ec10217214496"
	Feb 23 22:22:26 multinode-773885 kubelet[1292]: I0223 22:22:26.976270    1292 scope.go:115] "RemoveContainer" containerID="27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a"
	Feb 23 22:22:26 multinode-773885 kubelet[1292]: E0223 22:22:26.976460    1292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62cc7ef3-a47f-45ce-a9af-cf4de3e1824d)\"" pod="kube-system/storage-provisioner" podUID=62cc7ef3-a47f-45ce-a9af-cf4de3e1824d
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-773885 -n multinode-773885
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-773885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (114.14s)
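Reading the post-mortem above: the restarted control plane recovered (the kube-apiserver and kube-scheduler logs show caches syncing at 22:21:53 and quota evaluators re-registering through 22:22:06), but the worker multinode-773885-m02 did not come back cleanly; the status probe in the next test reports its kubelet as Stopped. A minimal triage sketch against this profile, built from commands that already appear in this report (the systemctl check mirrors the one the status probe runs over SSH; treat the exact flags as assumptions, not as the harness's method):

	# Which component is unhealthy? A non-zero exit means at least one is not Running.
	out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr

	# Inspect the kubelet unit on the worker directly (ssh -n targets a specific node).
	out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo systemctl status kubelet"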

TestMultiNode/serial/DeleteNode (3.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 node delete m03
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr
multinode_test.go:398: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr: exit status 2 (406.201456ms)

-- stdout --
	multinode-773885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773885-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 22:22:39.565896   80992 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:22:39.566373   80992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:22:39.566390   80992 out.go:309] Setting ErrFile to fd 2...
	I0223 22:22:39.566398   80992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:22:39.566676   80992 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:22:39.566943   80992 out.go:303] Setting JSON to false
	I0223 22:22:39.566987   80992 mustload.go:65] Loading cluster: multinode-773885
	I0223 22:22:39.567073   80992 notify.go:220] Checking for updates...
	I0223 22:22:39.567880   80992 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:39.567899   80992 status.go:255] checking status of multinode-773885 ...
	I0223 22:22:39.568331   80992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:39.568372   80992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:39.583405   80992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0223 22:22:39.583838   80992 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:39.584347   80992 main.go:141] libmachine: Using API Version  1
	I0223 22:22:39.584372   80992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:39.584673   80992 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:39.584842   80992 main.go:141] libmachine: (multinode-773885) Calling .GetState
	I0223 22:22:39.586607   80992 status.go:330] multinode-773885 host status = "Running" (err=<nil>)
	I0223 22:22:39.586622   80992 host.go:66] Checking if "multinode-773885" exists ...
	I0223 22:22:39.586969   80992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:39.587026   80992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:39.601199   80992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0223 22:22:39.601530   80992 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:39.601981   80992 main.go:141] libmachine: Using API Version  1
	I0223 22:22:39.602007   80992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:39.602300   80992 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:39.602465   80992 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:22:39.605137   80992 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:22:39.605541   80992 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:22:39.605578   80992 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:22:39.605711   80992 host.go:66] Checking if "multinode-773885" exists ...
	I0223 22:22:39.605978   80992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:39.606013   80992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:39.619589   80992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0223 22:22:39.619912   80992 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:39.620327   80992 main.go:141] libmachine: Using API Version  1
	I0223 22:22:39.620346   80992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:39.620606   80992 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:39.620794   80992 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:22:39.620951   80992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:22:39.620972   80992 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:22:39.623557   80992 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:22:39.623985   80992 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:22:39.624009   80992 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:22:39.624166   80992 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:22:39.624315   80992 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:22:39.624490   80992 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:22:39.624630   80992 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:22:39.710146   80992 ssh_runner.go:195] Run: systemctl --version
	I0223 22:22:39.715316   80992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:22:39.729945   80992 kubeconfig.go:92] found "multinode-773885" server: "https://192.168.39.240:8443"
	I0223 22:22:39.729973   80992 api_server.go:165] Checking apiserver status ...
	I0223 22:22:39.729998   80992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:22:39.741711   80992 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup
	I0223 22:22:39.750867   80992 api_server.go:181] apiserver freezer: "2:freezer:/kubepods/burstable/pode9459d167995578fa153c781fb0ec958/1f74fa3dd2e7b08fd893ff25cb2bf0d53382acb7afe69ac4140e539c9cb80367"
	I0223 22:22:39.750916   80992 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pode9459d167995578fa153c781fb0ec958/1f74fa3dd2e7b08fd893ff25cb2bf0d53382acb7afe69ac4140e539c9cb80367/freezer.state
	I0223 22:22:39.760187   80992 api_server.go:203] freezer state: "THAWED"
	I0223 22:22:39.760209   80992 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:22:39.765792   80992 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0223 22:22:39.765818   80992 status.go:421] multinode-773885 apiserver status = Running (err=<nil>)
	I0223 22:22:39.765827   80992 status.go:257] multinode-773885 status: &{Name:multinode-773885 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:22:39.765840   80992 status.go:255] checking status of multinode-773885-m02 ...
	I0223 22:22:39.766121   80992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:39.766152   80992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:39.780243   80992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0223 22:22:39.780677   80992 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:39.781131   80992 main.go:141] libmachine: Using API Version  1
	I0223 22:22:39.781150   80992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:39.781494   80992 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:39.781693   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
	I0223 22:22:39.783201   80992 status.go:330] multinode-773885-m02 host status = "Running" (err=<nil>)
	I0223 22:22:39.783225   80992 host.go:66] Checking if "multinode-773885-m02" exists ...
	I0223 22:22:39.783523   80992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:39.783557   80992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:39.797671   80992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0223 22:22:39.798031   80992 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:39.798513   80992 main.go:141] libmachine: Using API Version  1
	I0223 22:22:39.798534   80992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:39.798830   80992 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:39.798991   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:39.801814   80992 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:39.802294   80992 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:39.802322   80992 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:39.802566   80992 host.go:66] Checking if "multinode-773885-m02" exists ...
	I0223 22:22:39.802873   80992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:39.802912   80992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:39.816906   80992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I0223 22:22:39.817246   80992 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:39.817667   80992 main.go:141] libmachine: Using API Version  1
	I0223 22:22:39.817691   80992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:39.817968   80992 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:39.818180   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:39.818333   80992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:22:39.818350   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:39.820977   80992 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:39.821374   80992 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:39.821413   80992 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:39.821546   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:39.821708   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:39.821834   80992 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:39.822000   80992 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:39.914053   80992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:22:39.927012   80992 status.go:257] multinode-773885-m02 status: &{Name:multinode-773885-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:400: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr" : exit status 2
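The "node delete m03" step itself succeeded (the Audit table in the logs below records it starting and completing at 22:22 UTC); what fails is the follow-up status check, which exits 2 because multinode-773885-m02 reports "kubelet: Stopped" while its host is Running. A short sketch separating the two verifications (the kubectl form mirrors the helper invocation earlier in this report; reading a status exit of 2 as "a component is stopped" is inferred from this run, not from documentation):

	# Confirm the deleted node is gone from the cluster view.
	kubectl --context multinode-773885 get nodes

	# Component health; exits non-zero here because of the stopped kubelet on m02.
	out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr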
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-773885 -n multinode-773885
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-773885 logs -n 25: (1.35329252s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885:/home/docker/cp-test_multinode-773885-m02_multinode-773885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885 sudo cat                                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m02_multinode-773885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03:/home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885-m03 sudo cat                                   | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp testdata/cp-test.txt                                                | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885:/home/docker/cp-test_multinode-773885-m03_multinode-773885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885 sudo cat                                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m03_multinode-773885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt                       | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m02:/home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n                                                                 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | multinode-773885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773885 ssh -n multinode-773885-m02 sudo cat                                   | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | /home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-773885 node stop m03                                                          | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	| node    | multinode-773885 node start                                                             | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-773885                                                                | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC |                     |
	| stop    | -p multinode-773885                                                                     | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:21 UTC |
	| start   | -p multinode-773885                                                                     | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:21 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-773885                                                                | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:22 UTC |                     |
	| node    | multinode-773885 node delete                                                            | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:22 UTC | 23 Feb 23 22:22 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 22:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 22:21:13.262206   80620 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:21:13.262485   80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:21:13.262530   80620 out.go:309] Setting ErrFile to fd 2...
	I0223 22:21:13.262547   80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:21:13.263007   80620 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:21:13.263577   80620 out.go:303] Setting JSON to false
	I0223 22:21:13.264336   80620 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7426,"bootTime":1677183448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:21:13.264396   80620 start.go:135] virtualization: kvm guest
	I0223 22:21:13.267622   80620 out.go:177] * [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:21:13.268914   80620 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:21:13.268968   80620 notify.go:220] Checking for updates...
	I0223 22:21:13.270444   80620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:21:13.271889   80620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:13.273288   80620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 22:21:13.274630   80620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:21:13.275971   80620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:21:13.277689   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:13.277751   80620 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:21:13.278270   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:21:13.278328   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:21:13.292096   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0223 22:21:13.292502   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:21:13.293077   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:21:13.293100   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:21:13.293421   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:21:13.293604   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:13.326142   80620 out.go:177] * Using the kvm2 driver based on existing profile
	I0223 22:21:13.327601   80620 start.go:296] selected driver: kvm2
	I0223 22:21:13.327615   80620 start.go:857] validating driver "kvm2" against &{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:13.327745   80620 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:21:13.327989   80620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:21:13.328051   80620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-59858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 22:21:13.341443   80620 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 22:21:13.342073   80620 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 22:21:13.342106   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:13.342116   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:13.342128   80620 start_flags.go:319] config:
	{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:13.342256   80620 iso.go:125] acquiring lock: {Name:mka4f25d544a3ff8c2a2fab814177dd4b23f9fc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:21:13.344079   80620 out.go:177] * Starting control plane node multinode-773885 in cluster multinode-773885
	I0223 22:21:13.345362   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:13.345394   80620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 22:21:13.345409   80620 cache.go:57] Caching tarball of preloaded images
	I0223 22:21:13.345481   80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:21:13.345493   80620 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:21:13.345663   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:21:13.345836   80620 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:21:13.345858   80620 start.go:364] acquiring machines lock for multinode-773885: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 22:21:13.345897   80620 start.go:368] acquired machines lock for "multinode-773885" in 21.539µs
	I0223 22:21:13.345910   80620 start.go:96] Skipping create...Using existing machine configuration
	I0223 22:21:13.345916   80620 fix.go:55] fixHost starting: 
	I0223 22:21:13.346182   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:21:13.346210   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:21:13.358898   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0223 22:21:13.359326   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:21:13.359874   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:21:13.359895   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:21:13.360176   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:21:13.360338   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:13.360464   80620 main.go:141] libmachine: (multinode-773885) Calling .GetState
	I0223 22:21:13.361968   80620 fix.go:103] recreateIfNeeded on multinode-773885: state=Stopped err=<nil>
	I0223 22:21:13.361991   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	W0223 22:21:13.362122   80620 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 22:21:13.364431   80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885" ...
	I0223 22:21:13.365638   80620 main.go:141] libmachine: (multinode-773885) Calling .Start
	I0223 22:21:13.365789   80620 main.go:141] libmachine: (multinode-773885) Ensuring networks are active...
	I0223 22:21:13.366413   80620 main.go:141] libmachine: (multinode-773885) Ensuring network default is active
	I0223 22:21:13.366726   80620 main.go:141] libmachine: (multinode-773885) Ensuring network mk-multinode-773885 is active
	I0223 22:21:13.367088   80620 main.go:141] libmachine: (multinode-773885) Getting domain xml...
	I0223 22:21:13.367766   80620 main.go:141] libmachine: (multinode-773885) Creating domain...
	I0223 22:21:14.564410   80620 main.go:141] libmachine: (multinode-773885) Waiting to get IP...
	I0223 22:21:14.565318   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:14.565709   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:14.565811   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.565729   80650 retry.go:31] will retry after 216.926568ms: waiting for machine to come up
	I0223 22:21:14.784224   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:14.784682   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:14.784711   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.784633   80650 retry.go:31] will retry after 249.246042ms: waiting for machine to come up
	I0223 22:21:15.035098   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.035423   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.035451   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.035397   80650 retry.go:31] will retry after 334.153469ms: waiting for machine to come up
	I0223 22:21:15.370820   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.371326   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.371360   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.371252   80650 retry.go:31] will retry after 394.396319ms: waiting for machine to come up
	I0223 22:21:15.766773   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:15.767259   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:15.767292   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.767204   80650 retry.go:31] will retry after 580.71112ms: waiting for machine to come up
	I0223 22:21:16.350049   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:16.350438   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:16.350468   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:16.350387   80650 retry.go:31] will retry after 812.475241ms: waiting for machine to come up
	I0223 22:21:17.164302   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:17.164761   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:17.164794   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:17.164713   80650 retry.go:31] will retry after 1.090615613s: waiting for machine to come up
	I0223 22:21:18.257489   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:18.257882   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:18.257949   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:18.257850   80650 retry.go:31] will retry after 1.207436911s: waiting for machine to come up
	I0223 22:21:19.467391   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:19.467804   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:19.467836   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:19.467758   80650 retry.go:31] will retry after 1.522373862s: waiting for machine to come up
	I0223 22:21:20.992569   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:20.992936   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:20.992965   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:20.992883   80650 retry.go:31] will retry after 2.133891724s: waiting for machine to come up
	I0223 22:21:23.129156   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:23.129626   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:23.129648   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:23.129597   80650 retry.go:31] will retry after 2.398257467s: waiting for machine to come up
	I0223 22:21:25.529031   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:25.529472   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:25.529508   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:25.529418   80650 retry.go:31] will retry after 2.616816039s: waiting for machine to come up
	I0223 22:21:28.149307   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:28.149703   80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
	I0223 22:21:28.149732   80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:28.149668   80650 retry.go:31] will retry after 3.093858159s: waiting for machine to come up
	I0223 22:21:31.245491   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.245970   80620 main.go:141] libmachine: (multinode-773885) Found IP for machine: 192.168.39.240
	I0223 22:21:31.245992   80620 main.go:141] libmachine: (multinode-773885) Reserving static IP address...
	I0223 22:21:31.246035   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has current primary IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.246498   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.246523   80620 main.go:141] libmachine: (multinode-773885) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"}
	I0223 22:21:31.246531   80620 main.go:141] libmachine: (multinode-773885) Reserved static IP address: 192.168.39.240
	I0223 22:21:31.246540   80620 main.go:141] libmachine: (multinode-773885) Waiting for SSH to be available...
	I0223 22:21:31.246549   80620 main.go:141] libmachine: (multinode-773885) DBG | Getting to WaitForSSH function...
	I0223 22:21:31.248477   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.248821   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.248848   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.248945   80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH client type: external
	I0223 22:21:31.248970   80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa (-rw-------)
	I0223 22:21:31.249043   80620 main.go:141] libmachine: (multinode-773885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 22:21:31.249076   80620 main.go:141] libmachine: (multinode-773885) DBG | About to run SSH command:
	I0223 22:21:31.249094   80620 main.go:141] libmachine: (multinode-773885) DBG | exit 0
	I0223 22:21:31.338971   80620 main.go:141] libmachine: (multinode-773885) DBG | SSH cmd err, output: <nil>: 
	I0223 22:21:31.339315   80620 main.go:141] libmachine: (multinode-773885) Calling .GetConfigRaw
	I0223 22:21:31.339952   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:31.342708   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.343091   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.343112   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.343382   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:21:31.343587   80620 machine.go:88] provisioning docker machine ...
	I0223 22:21:31.343612   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:31.343856   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.344026   80620 buildroot.go:166] provisioning hostname "multinode-773885"
	I0223 22:21:31.344045   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.344189   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.346343   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.346741   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.346772   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.346912   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.347101   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.347235   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.347362   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.347563   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.347987   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.348001   80620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773885 && echo "multinode-773885" | sudo tee /etc/hostname
	I0223 22:21:31.483698   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885
	
	I0223 22:21:31.483729   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.486353   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.486705   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.486729   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.486927   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.487146   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.487349   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.487567   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.487765   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.488223   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.488247   80620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:21:31.610531   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:21:31.610563   80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
	I0223 22:21:31.610579   80620 buildroot.go:174] setting up certificates
	I0223 22:21:31.610589   80620 provision.go:83] configureAuth start
	I0223 22:21:31.610602   80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
	I0223 22:21:31.610887   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:31.613554   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.613875   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.613901   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.614087   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.616271   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.616732   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.616766   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.616828   80620 provision.go:138] copyHostCerts
	I0223 22:21:31.616880   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:21:31.616925   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
	I0223 22:21:31.616938   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:21:31.617049   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
	I0223 22:21:31.617142   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:21:31.617171   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
	I0223 22:21:31.617182   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:21:31.617225   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
	I0223 22:21:31.617338   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:21:31.617367   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
	I0223 22:21:31.617373   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:21:31.617412   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
	I0223 22:21:31.617475   80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-773885]
	I0223 22:21:31.813280   80620 provision.go:172] copyRemoteCerts
	I0223 22:21:31.813353   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:21:31.813402   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.816285   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.816679   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.816716   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.816918   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.817162   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.817351   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.817481   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:31.903913   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:21:31.904023   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 22:21:31.928843   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:21:31.928908   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 22:21:31.953083   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:21:31.953136   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:21:31.977825   80620 provision.go:86] duration metric: configureAuth took 367.222576ms
	I0223 22:21:31.977848   80620 buildroot.go:189] setting minikube options for container-runtime
	I0223 22:21:31.978069   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:31.978096   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:31.978344   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:31.980808   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.981196   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:31.981226   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:31.981404   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:31.981631   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.981794   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:31.981903   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:31.982052   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:31.982469   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:31.982488   80620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:21:32.100345   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0223 22:21:32.100366   80620 buildroot.go:70] root file system type: tmpfs
	I0223 22:21:32.100467   80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:21:32.100489   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:32.103003   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.103407   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:32.103436   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.103637   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:32.103824   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.103965   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.104148   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:32.104371   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:32.104858   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:32.104953   80620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:21:32.237312   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:21:32.237343   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:32.240081   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.240430   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:32.240481   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:32.240599   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:32.240764   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.240928   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:32.241022   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:32.241158   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:32.241558   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:32.241575   80620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:21:33.112176   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0223 22:21:33.112206   80620 machine.go:91] provisioned docker machine in 1.76860164s
	I0223 22:21:33.112216   80620 start.go:300] post-start starting for "multinode-773885" (driver="kvm2")
	I0223 22:21:33.112222   80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:21:33.112238   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.112595   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:21:33.112636   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.115711   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.116122   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.116159   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.116274   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.116476   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.116715   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.116933   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.204860   80620 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:21:33.208799   80620 command_runner.go:130] > NAME=Buildroot
	I0223 22:21:33.208819   80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0223 22:21:33.208823   80620 command_runner.go:130] > ID=buildroot
	I0223 22:21:33.208829   80620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0223 22:21:33.208833   80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0223 22:21:33.208858   80620 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 22:21:33.208867   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
	I0223 22:21:33.208924   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
	I0223 22:21:33.208996   80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
	I0223 22:21:33.209017   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
	I0223 22:21:33.209096   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:21:33.216834   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:21:33.238598   80620 start.go:303] post-start completed in 126.369412ms
	I0223 22:21:33.238618   80620 fix.go:57] fixHost completed within 19.892701007s
	I0223 22:21:33.238638   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.241628   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.242000   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.242020   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.242184   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.242377   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.242544   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.242697   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.242867   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:21:33.243253   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0223 22:21:33.243264   80620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0223 22:21:33.359558   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190893.310436860
	
	I0223 22:21:33.359587   80620 fix.go:207] guest clock: 1677190893.310436860
	I0223 22:21:33.359596   80620 fix.go:220] Guest: 2023-02-23 22:21:33.31043686 +0000 UTC Remote: 2023-02-23 22:21:33.238622371 +0000 UTC m=+20.014549698 (delta=71.814489ms)
	I0223 22:21:33.359621   80620 fix.go:191] guest clock delta is within tolerance: 71.814489ms
	I0223 22:21:33.359628   80620 start.go:83] releasing machines lock for "multinode-773885", held for 20.013722401s
	I0223 22:21:33.359654   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.359925   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:33.362448   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.362830   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.362872   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.362979   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363495   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363673   80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:21:33.363761   80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:21:33.363798   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.363978   80620 ssh_runner.go:195] Run: cat /version.json
	I0223 22:21:33.364008   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:21:33.366567   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.366853   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.366894   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.366918   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.367103   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.367284   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.367338   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:33.367363   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:33.367483   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.367511   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:21:33.367637   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.367796   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:21:33.367946   80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:21:33.368088   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:21:33.472525   80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:21:33.472587   80620 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
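Both probes above (the curl against registry.k8s.io and `cat /version.json`) run over freshly dialed SSH sessions keyed by the per-machine id_rsa. A self-contained sketch of that pattern with golang.org/x/crypto/ssh; the host-key handling and error style are assumptions, not minikube's sshutil:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machines/multinode-773885/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.240:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("cat /version.json") // same probe as the log above
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }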
	I0223 22:21:33.472717   80620 ssh_runner.go:195] Run: systemctl --version
	I0223 22:21:33.478170   80620 command_runner.go:130] > systemd 247 (247)
	I0223 22:21:33.478214   80620 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0223 22:21:33.478449   80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:21:33.483322   80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0223 22:21:33.483517   80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 22:21:33.483559   80620 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:21:33.486877   80620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:21:33.486963   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:21:33.494937   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:21:33.509789   80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:21:33.522704   80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0223 22:21:33.523037   80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 22:21:33.523053   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:33.523114   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:33.547334   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:33.547357   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:33.547366   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:33.547373   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:33.547379   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:33.547386   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:33.547393   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:33.547402   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:33.547409   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:33.547429   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:33.547437   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:33.548840   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:33.548856   80620 docker.go:560] Images already preloaded, skipping extraction
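The preload decision reduces to: list what the runtime already has and confirm the required images are a subset. A rough equivalent, assuming the same `docker images --format` call; the required list is abbreviated from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	required := []string{ // abbreviated; the log lists eleven images
    		"registry.k8s.io/kube-apiserver:v1.26.1",
    		"registry.k8s.io/etcd:3.5.6-0",
    		"registry.k8s.io/pause:3.9",
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing, would extract preload:", img)
    			return
    		}
    	}
    	fmt.Println("Images already preloaded, skipping extraction")
    }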
	I0223 22:21:33.548865   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:21:33.548962   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:21:33.565249   80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:21:33.565271   80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:21:33.565339   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:21:33.574475   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:21:33.582936   80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:21:33.582977   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:21:33.591609   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:21:33.600301   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:21:33.608920   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:21:33.617470   80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:21:33.626224   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:21:33.634536   80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:21:33.642631   80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:21:33.642679   80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:21:33.650322   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:33.748276   80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:21:33.765231   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:21:33.765298   80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:21:33.783055   80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0223 22:21:33.783552   80620 command_runner.go:130] > [Unit]
	I0223 22:21:33.783568   80620 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:21:33.783574   80620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:21:33.783579   80620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0223 22:21:33.783584   80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0223 22:21:33.783589   80620 command_runner.go:130] > StartLimitBurst=3
	I0223 22:21:33.783595   80620 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:21:33.783598   80620 command_runner.go:130] > [Service]
	I0223 22:21:33.783603   80620 command_runner.go:130] > Type=notify
	I0223 22:21:33.783607   80620 command_runner.go:130] > Restart=on-failure
	I0223 22:21:33.783614   80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:21:33.783625   80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:21:33.783631   80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:21:33.783640   80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:21:33.783647   80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:21:33.783653   80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:21:33.783660   80620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:21:33.783668   80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:21:33.783674   80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:21:33.783678   80620 command_runner.go:130] > ExecStart=
	I0223 22:21:33.783691   80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0223 22:21:33.783696   80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:21:33.783702   80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:21:33.783708   80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:21:33.783712   80620 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:21:33.783715   80620 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:21:33.783719   80620 command_runner.go:130] > LimitCORE=infinity
	I0223 22:21:33.783724   80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:21:33.783728   80620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:21:33.783733   80620 command_runner.go:130] > TasksMax=infinity
	I0223 22:21:33.783736   80620 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:21:33.783742   80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:21:33.783746   80620 command_runner.go:130] > Delegate=yes
	I0223 22:21:33.783751   80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:21:33.783755   80620 command_runner.go:130] > KillMode=process
	I0223 22:21:33.783758   80620 command_runner.go:130] > [Install]
	I0223 22:21:33.783765   80620 command_runner.go:130] > WantedBy=multi-user.target
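The unit shown above is the standard systemd override idiom: an empty `ExecStart=` clears the command inherited from the base unit before the real one is set, because for non-oneshot services systemd refuses a unit with two accumulated ExecStart lines. A small, purely illustrative checker for that mistake:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("systemctl", "cat", "docker.service").Output()
    	if err != nil {
    		panic(err)
    	}
    	nonEmpty := 0
    	for _, line := range strings.Split(string(out), "\n") {
    		line = strings.TrimSpace(line)
    		// Count only ExecStart lines that actually set a command;
    		// a bare "ExecStart=" is the reset, not a second command.
    		if strings.HasPrefix(line, "ExecStart=") && line != "ExecStart=" {
    			nonEmpty++
    		}
    	}
    	if nonEmpty > 1 {
    		fmt.Println("more than one ExecStart= setting; systemd will refuse to start this unit")
    	} else {
    		fmt.Println("ExecStart override looks consistent")
    	}
    }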
	I0223 22:21:33.784203   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:21:33.800310   80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0223 22:21:33.820089   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:21:33.831934   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:21:33.843320   80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 22:21:33.870509   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:21:33.882768   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:21:33.898405   80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:21:33.898433   80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:21:33.898700   80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:21:33.998916   80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:21:34.101490   80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:21:34.101526   80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:21:34.117559   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:34.221898   80620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:21:35.643194   80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.421256026s)
	I0223 22:21:35.643291   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:21:35.759716   80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:21:35.863224   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:21:35.965951   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:21:36.072240   80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:21:36.092427   80620 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:21:36.092508   80620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:21:36.104108   80620 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:21:36.104128   80620 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:21:36.104134   80620 command_runner.go:130] > Device: 16h/22d	Inode: 814         Links: 1
	I0223 22:21:36.104143   80620 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0223 22:21:36.104156   80620 command_runner.go:130] > Access: 2023-02-23 22:21:36.038985633 +0000
	I0223 22:21:36.104168   80620 command_runner.go:130] > Modify: 2023-02-23 22:21:36.038985633 +0000
	I0223 22:21:36.104180   80620 command_runner.go:130] > Change: 2023-02-23 22:21:36.041985633 +0000
	I0223 22:21:36.104189   80620 command_runner.go:130] >  Birth: -
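"Will wait 60s for socket path" boils down to stat-until-it-is-a-socket. A minimal version; the 500ms poll interval is a guess, the 60s budget comes from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Succeed only once the path exists and is a unix socket.
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("socket is up")
    }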
	I0223 22:21:36.104213   80620 start.go:553] Will wait 60s for crictl version
	I0223 22:21:36.104260   80620 ssh_runner.go:195] Run: which crictl
	I0223 22:21:36.110223   80620 command_runner.go:130] > /usr/bin/crictl
	I0223 22:21:36.110588   80620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:21:36.185549   80620 command_runner.go:130] > Version:  0.1.0
	I0223 22:21:36.185577   80620 command_runner.go:130] > RuntimeName:  docker
	I0223 22:21:36.185585   80620 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0223 22:21:36.185593   80620 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:21:36.185626   80620 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0223 22:21:36.185698   80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:21:36.217919   80620 command_runner.go:130] > 20.10.23
	I0223 22:21:36.219196   80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:21:36.248973   80620 command_runner.go:130] > 20.10.23
	I0223 22:21:36.253095   80620 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0223 22:21:36.253136   80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:21:36.255830   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:36.256233   80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:21:36.256260   80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:21:36.256492   80620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0223 22:21:36.260126   80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
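The grep/echo/cp one-liner above keeps /etc/hosts idempotent: drop any stale line for the name, append the fresh mapping, then replace the file. The same logic sketched in Go (the temp-file path and permissions are assumptions; like the `sudo cp`, it needs root):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	// Keep every line except an existing mapping for this hostname.
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	// Write a sibling temp file, then rename over the original.
    	tmp := "/etc/hosts.new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
    		panic(err)
    	}
    }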
	I0223 22:21:36.272218   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:21:36.272269   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:36.294497   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:36.294518   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:36.294523   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:36.294528   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:36.294532   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:36.294536   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:36.294541   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:36.294546   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:36.294550   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:36.294554   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:36.294558   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:36.295537   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:36.295553   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:21:36.295600   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:21:36.317087   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:21:36.317104   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:21:36.317109   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:21:36.317114   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:21:36.317119   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:21:36.317123   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:21:36.317127   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:21:36.317133   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:21:36.317137   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:21:36.317142   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:21:36.317149   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:21:36.318116   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:21:36.318131   80620 cache_images.go:84] Images are preloaded, skipping loading
	I0223 22:21:36.318198   80620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:21:36.351288   80620 command_runner.go:130] > cgroupfs
	I0223 22:21:36.352347   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:36.352366   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:36.352384   80620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:21:36.352404   80620 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773885 NodeName:multinode-773885 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:21:36.352535   80620 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-773885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
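The config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`; kubeadm dispatches each document by its kind. A sketch that splits such a file and reports each document's kind, using plain string handling rather than a YAML parser:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "(unknown)"
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				kind = strings.TrimPrefix(line, "kind: ")
    				break
    			}
    		}
    		fmt.Printf("document %d: %s\n", i, kind)
    	}
    }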
	
	I0223 22:21:36.352608   80620 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-773885 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 22:21:36.352654   80620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:21:36.361734   80620 command_runner.go:130] > kubeadm
	I0223 22:21:36.361745   80620 command_runner.go:130] > kubectl
	I0223 22:21:36.361749   80620 command_runner.go:130] > kubelet
	I0223 22:21:36.361984   80620 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:21:36.362045   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 22:21:36.369631   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0223 22:21:36.384815   80620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:21:36.399471   80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0223 22:21:36.414791   80620 ssh_runner.go:195] Run: grep 192.168.39.240	control-plane.minikube.internal$ /etc/hosts
	I0223 22:21:36.418133   80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:21:36.429567   80620 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885 for IP: 192.168.39.240
	I0223 22:21:36.429596   80620 certs.go:186] acquiring lock for shared ca certs: {Name:mkb47a35d7b33f6ba829c92dc16cfaf70cb716c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:36.429732   80620 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key
	I0223 22:21:36.429768   80620 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key
	I0223 22:21:36.429863   80620 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key
	I0223 22:21:36.429933   80620 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key.ac2ca5a7
	I0223 22:21:36.429971   80620 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key
	I0223 22:21:36.429982   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 22:21:36.429999   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 22:21:36.430009   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 22:21:36.430023   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 22:21:36.430035   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:21:36.430047   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:21:36.430058   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:21:36.430070   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:21:36.430120   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem (1338 bytes)
	W0223 22:21:36.430145   80620 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927_empty.pem, impossibly tiny 0 bytes
	I0223 22:21:36.430155   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:21:36.430178   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem (1078 bytes)
	I0223 22:21:36.430200   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:21:36.430224   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem (1671 bytes)
	I0223 22:21:36.430265   80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:21:36.430293   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.430307   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.430319   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem -> /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.430835   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 22:21:36.452666   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 22:21:36.474354   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 22:21:36.496347   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 22:21:36.518192   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:21:36.539742   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:21:36.561567   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:21:36.582936   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:21:36.605667   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /usr/share/ca-certificates/669272.pem (1708 bytes)
	I0223 22:21:36.627349   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:21:36.649138   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem --> /usr/share/ca-certificates/66927.pem (1338 bytes)
	I0223 22:21:36.670645   80620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 22:21:36.685674   80620 ssh_runner.go:195] Run: openssl version
	I0223 22:21:36.690629   80620 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0223 22:21:36.690924   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66927.pem && ln -fs /usr/share/ca-certificates/66927.pem /etc/ssl/certs/66927.pem"
	I0223 22:21:36.699754   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.703759   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.704095   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.704128   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66927.pem
	I0223 22:21:36.709182   80620 command_runner.go:130] > 51391683
	I0223 22:21:36.709238   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/66927.pem /etc/ssl/certs/51391683.0"
	I0223 22:21:36.718122   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669272.pem && ln -fs /usr/share/ca-certificates/669272.pem /etc/ssl/certs/669272.pem"
	I0223 22:21:36.726789   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730766   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730841   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.730885   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669272.pem
	I0223 22:21:36.735795   80620 command_runner.go:130] > 3ec20f2e
	I0223 22:21:36.736176   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/669272.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 22:21:36.745026   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:21:36.753682   80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757609   80620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757830   80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.757864   80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:21:36.762876   80620 command_runner.go:130] > b5213941
	I0223 22:21:36.762930   80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
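The openssl steps above implement OpenSSL's CA lookup convention: a certificate under /etc/ssl/certs must be reachable as <subject-hash>.0, so each PEM is hashed and symlinked. The equivalent in Go, shelling out to the same `openssl x509 -hash` subcommand (paths taken from the log; the symlink needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(pem, link); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("CA reachable at", link)
    }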
	I0223 22:21:36.771746   80620 kubeadm.go:401] StartCluster: {Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:21:36.771889   80620 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:21:36.795673   80620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 22:21:36.804158   80620 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0223 22:21:36.804177   80620 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0223 22:21:36.804208   80620 command_runner.go:130] > /var/lib/minikube/etcd:
	I0223 22:21:36.804223   80620 command_runner.go:130] > member
	I0223 22:21:36.804253   80620 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 22:21:36.804270   80620 kubeadm.go:633] restartCluster start
	I0223 22:21:36.804326   80620 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 22:21:36.812345   80620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:36.812718   80620 kubeconfig.go:135] verify returned: extract IP: "multinode-773885" does not appear in /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:36.812798   80620 kubeconfig.go:146] "multinode-773885" context is missing from /home/jenkins/minikube-integration/15909-59858/kubeconfig - will repair!
	I0223 22:21:36.813094   80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:36.813506   80620 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:36.813719   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:21:36.814424   80620 cert_rotation.go:137] Starting client certificate rotation controller
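The rest.Config dump above is what client-go derives from the kubeconfig that was just repaired. Building the same client configuration the conventional client-go way, as a sketch (the kubeconfig path is the one in the log; error handling reduced to panics):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := "/home/jenkins/minikube-integration/15909-59858/kubeconfig"
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("API server:", cfg.Host) // https://192.168.39.240:8443 per the log
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	_ = clientset // ready for e.g. listing nodes via clientset.CoreV1()
    }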
	I0223 22:21:36.814616   80620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 22:21:36.822391   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:36.822434   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:36.832386   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:37.333153   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:37.333231   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:37.344298   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:37.832833   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:37.832931   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:37.843863   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:38.333039   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:38.333157   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:38.344397   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:38.833335   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:38.833418   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:38.844307   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:39.332585   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:39.332660   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:39.343665   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:39.833274   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:39.833358   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:39.844484   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:40.332983   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:40.333065   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:40.344099   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:40.832657   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:40.832750   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:40.843615   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:41.333154   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:41.333245   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:41.344059   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:41.832619   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:41.832703   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:41.843654   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:42.333248   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:42.333328   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:42.344533   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:42.833157   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:42.833256   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:42.843975   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:43.333351   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:43.333418   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:43.344740   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:43.832562   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:43.832672   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:43.843659   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:44.333327   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:44.333407   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:44.344578   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:44.833173   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:44.833245   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:44.844332   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:45.332909   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:45.333037   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:45.344107   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:45.832647   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:45.832732   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:45.843986   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.332538   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.332617   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.343428   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.833367   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.833455   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.844521   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.844541   80620 api_server.go:165] Checking apiserver status ...
	I0223 22:21:46.844582   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 22:21:46.854411   80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 22:21:46.854446   80620 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
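Each "Checking apiserver status" block above is one iteration of a pgrep poll at roughly 500ms intervals until a deadline, after which the restart path gives up and reconfigures. The pattern reduced to essentials; the 10-second budget here is an assumption, the real one is evidently longer:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(10 * time.Second)
    	for time.Now().Before(deadline) {
    		// Same probe as the log (there run via sudo over SSH):
    		// find a kube-apiserver process for this profile.
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("apiserver is running")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
    }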
	I0223 22:21:46.854455   80620 kubeadm.go:1120] stopping kube-system containers ...
	I0223 22:21:46.854520   80620 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:21:46.882631   80620 command_runner.go:130] > a31cf43457e0
	I0223 22:21:46.882655   80620 command_runner.go:130] > b83daa4cdd8d
	I0223 22:21:46.882661   80620 command_runner.go:130] > 75e472928e30
	I0223 22:21:46.882666   80620 command_runner.go:130] > 20f2e353f8d4
	I0223 22:21:46.882674   80620 command_runner.go:130] > f6b2b873cba9
	I0223 22:21:46.882682   80620 command_runner.go:130] > 6becaf5c8640
	I0223 22:21:46.882688   80620 command_runner.go:130] > a2a9a29b5a41
	I0223 22:21:46.882694   80620 command_runner.go:130] > f284ce294fa0
	I0223 22:21:46.882700   80620 command_runner.go:130] > 8d29ee663e61
	I0223 22:21:46.882707   80620 command_runner.go:130] > baad115b76c6
	I0223 22:21:46.882725   80620 command_runner.go:130] > 53723346fe3c
	I0223 22:21:46.882735   80620 command_runner.go:130] > 6a41aad93299
	I0223 22:21:46.882743   80620 command_runner.go:130] > 745d6ec7adf4
	I0223 22:21:46.882750   80620 command_runner.go:130] > 979e703c6176
	I0223 22:21:46.882757   80620 command_runner.go:130] > 3b6e6d975efa
	I0223 22:21:46.882766   80620 command_runner.go:130] > 072b5f08a10f
	I0223 22:21:46.882797   80620 docker.go:456] Stopping containers: [a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f]
	I0223 22:21:46.882868   80620 ssh_runner.go:195] Run: docker stop a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f
	I0223 22:21:46.908823   80620 command_runner.go:130] > a31cf43457e0
	I0223 22:21:46.908844   80620 command_runner.go:130] > b83daa4cdd8d
	I0223 22:21:46.908853   80620 command_runner.go:130] > 75e472928e30
	I0223 22:21:46.908858   80620 command_runner.go:130] > 20f2e353f8d4
	I0223 22:21:46.908865   80620 command_runner.go:130] > f6b2b873cba9
	I0223 22:21:46.908870   80620 command_runner.go:130] > 6becaf5c8640
	I0223 22:21:46.908876   80620 command_runner.go:130] > a2a9a29b5a41
	I0223 22:21:46.909404   80620 command_runner.go:130] > f284ce294fa0
	I0223 22:21:46.909419   80620 command_runner.go:130] > 8d29ee663e61
	I0223 22:21:46.909424   80620 command_runner.go:130] > baad115b76c6
	I0223 22:21:46.909441   80620 command_runner.go:130] > 53723346fe3c
	I0223 22:21:46.909828   80620 command_runner.go:130] > 6a41aad93299
	I0223 22:21:46.909847   80620 command_runner.go:130] > 745d6ec7adf4
	I0223 22:21:46.909853   80620 command_runner.go:130] > 979e703c6176
	I0223 22:21:46.909858   80620 command_runner.go:130] > 3b6e6d975efa
	I0223 22:21:46.909864   80620 command_runner.go:130] > 072b5f08a10f
	I0223 22:21:46.911025   80620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 22:21:46.925825   80620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 22:21:46.933780   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 22:21:46.933807   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 22:21:46.933818   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 22:21:46.933842   80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:21:46.934068   80620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:21:46.934127   80620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 22:21:46.942292   80620 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 22:21:46.942311   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.060140   80620 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:21:47.060421   80620 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 22:21:47.060722   80620 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 22:21:47.061266   80620 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 22:21:47.061579   80620 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0223 22:21:47.062097   80620 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0223 22:21:47.062730   80620 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0223 22:21:47.063273   80620 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0223 22:21:47.063668   80620 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0223 22:21:47.064166   80620 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 22:21:47.064500   80620 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 22:21:47.064789   80620 command_runner.go:130] > [certs] Using the existing "sa" key
	I0223 22:21:47.066082   80620 command_runner.go:130] ! W0223 22:21:47.003599    1259 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.066190   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.118462   80620 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:21:47.207705   80620 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:21:47.310176   80620 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:21:47.491530   80620 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:21:47.570853   80620 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:21:47.573364   80620 command_runner.go:130] ! W0223 22:21:47.061082    1265 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.573502   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.637325   80620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:21:47.638644   80620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:21:47.638664   80620 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:21:47.751602   80620 command_runner.go:130] ! W0223 22:21:47.567753    1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.751640   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.811937   80620 command_runner.go:130] ! W0223 22:21:47.761774    1293 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:47.829349   80620 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:21:47.829375   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:21:47.829384   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:21:47.829392   80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:21:47.829573   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:47.919203   80620 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:21:47.922916   80620 command_runner.go:130] ! W0223 22:21:47.858650    1302 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
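The five `kubeadm init phase` invocations above rerun only the pieces of init that a restart needs, in order: certificates, kubeconfig files, kubelet start, control-plane static pod manifests, and local etcd. A sketch of driving that sequence (PATH prefix and config path are taken from the log; error handling is simplified):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Phases rerun on restart, in the same order as the log above.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" ` +
			"kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}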
	I0223 22:21:47.923089   80620 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:21:47.923171   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:48.438055   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:48.938524   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:49.437773   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:49.938504   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:50.438625   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:21:50.455679   80620 command_runner.go:130] > 1675
	I0223 22:21:50.456038   80620 api_server.go:71] duration metric: took 2.532952682s to wait for apiserver process to appear ...
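The ~500ms cadence of the `pgrep` lines above is a poll loop: keep asking for the newest matching kube-apiserver PID until one appears. A compact sketch (timeout and interval are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process shows up,
// returning its PID, as in the "waiting for apiserver process" step above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(30 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}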
	I0223 22:21:50.456061   80620 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:21:50.456073   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:50.456563   80620 api_server.go:268] stopped: https://192.168.39.240:8443/healthz: Get "https://192.168.39.240:8443/healthz": dial tcp 192.168.39.240:8443: connect: connection refused
	I0223 22:21:50.957285   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:53.851413   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 22:21:53.851440   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 22:21:53.957622   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:53.962959   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:53.962996   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:54.457567   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:54.462593   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:54.462613   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:54.957140   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:54.975573   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 22:21:54.975619   80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 22:21:55.457159   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:21:55.468052   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
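The healthz sequence above is the normal startup progression for an anonymous probe: first "connection refused" while the apiserver binds its port, then 403 while the RBAC bootstrap roles that let unauthenticated clients read /healthz are still being created, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200 with body "ok". A sketch of such a poller, assuming an unauthenticated client that skips TLS verification precisely because it has no cluster CA to hand:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Anonymous, CA-less probe: skip verification and accept that early
	// responses may be 403 or 500 before the post-start hooks complete.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.240:8443/healthz"
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // e.g. "connection refused" while the apiserver binds
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
		// 403/500 mean "up but not ready yet"; keep polling.
	}
	fmt.Println("timed out waiting for healthz")
}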
	I0223 22:21:55.468134   80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0223 22:21:55.468145   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:55.468159   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:55.468173   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:55.478605   80620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0223 22:21:55.478631   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:55.478639   80620 round_trippers.go:580]     Content-Length: 263
	I0223 22:21:55.478645   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:55 GMT
	I0223 22:21:55.478651   80620 round_trippers.go:580]     Audit-Id: 0e80152b-56d5-4ba7-8d3d-ebf4ef092ec4
	I0223 22:21:55.478656   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:55.478661   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:55.478667   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:55.478677   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:55.478720   80620 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:21:55.478820   80620 api_server.go:140] control plane version: v1.26.1
	I0223 22:21:55.478837   80620 api_server.go:130] duration metric: took 5.022769855s to wait for apiserver health ...
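Once healthz returns ok, the client confirms the control-plane version by decoding the /version payload shown above. A minimal decoder for that JSON shape (field set trimmed to what the log prints):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo matches the fields of the /version response body above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"26","gitVersion":"v1.26.1","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}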
	I0223 22:21:55.478847   80620 cni.go:84] Creating CNI manager for ""
	I0223 22:21:55.478864   80620 cni.go:136] 3 nodes found, recommending kindnet
	I0223 22:21:55.481215   80620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 22:21:55.482654   80620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:21:55.487827   80620 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:21:55.487850   80620 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0223 22:21:55.487860   80620 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0223 22:21:55.487870   80620 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:21:55.487881   80620 command_runner.go:130] > Access: 2023-02-23 22:21:25.431985633 +0000
	I0223 22:21:55.487897   80620 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0223 22:21:55.487905   80620 command_runner.go:130] > Change: 2023-02-23 22:21:23.668985633 +0000
	I0223 22:21:55.487910   80620 command_runner.go:130] >  Birth: -
	I0223 22:21:55.488315   80620 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:21:55.488335   80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:21:55.519404   80620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:21:56.635297   80620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:21:56.642116   80620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:21:56.645709   80620 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 22:21:56.664280   80620 command_runner.go:130] > daemonset.apps/kindnet configured
	I0223 22:21:56.666573   80620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.147136699s)
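The kindnet manifest is applied with the pinned kubectl binary and the in-VM kubeconfig; `kubectl apply` is idempotent, which is why on restart most objects report "unchanged" above while only the kindnet DaemonSet reports "configured". An illustrative invocation (paths taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Idempotent apply: reruns leave unchanged objects alone and only
	// patch what differs (here, the kindnet DaemonSet).
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.26.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}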
	I0223 22:21:56.666612   80620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:21:56.666717   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:21:56.666728   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.666739   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.666748   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.670034   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:21:56.670049   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.670056   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.670062   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.670081   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.670087   80620 round_trippers.go:580]     Audit-Id: 03e54a77-0840-4896-9a52-5cdd73109000
	I0223 22:21:56.670100   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.670111   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.671358   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
	I0223 22:21:56.675255   80620 system_pods.go:59] 12 kube-system pods found
	I0223 22:21:56.675279   80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:21:56.675286   80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 22:21:56.675291   80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:21:56.675295   80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:21:56.675316   80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:21:56.675325   80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:21:56.675337   80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 22:21:56.675345   80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:21:56.675349   80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:21:56.675356   80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:21:56.675361   80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 22:21:56.675367   80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:21:56.675372   80620 system_pods.go:74] duration metric: took 8.754325ms to wait for pod list to return data ...
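The "waiting for kube-system pods to appear" step above is a plain pod list against the kube-system namespace, with each pod's phase and container readiness summarized as printed. A client-go sketch of the same query, assuming the in-VM kubeconfig path and a client-go version with context-aware List:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}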
	I0223 22:21:56.675385   80620 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:21:56.675430   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0223 22:21:56.675437   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.675444   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.675451   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.680543   80620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 22:21:56.680557   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.680564   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.680569   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.680577   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.680582   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.680589   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.680597   80620 round_trippers.go:580]     Audit-Id: e86d112e-250e-4963-a6fb-b8fd3c902f59
	I0223 22:21:56.681128   80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16319 chars]
	I0223 22:21:56.681878   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681909   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681918   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681922   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681926   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:21:56.681932   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:21:56.681938   80620 node_conditions.go:105] duration metric: took 6.549163ms to run NodePressure ...
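The NodePressure verification lists all nodes and reads two capacity figures per node, which is where the repeated "ephemeral capacity is 17784752Ki / cpu capacity is 2" pairs above come from (one pair per node, three nodes). A client-go sketch of that read, under the same kubeconfig assumption as before:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity on the node status.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
}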
	I0223 22:21:56.681958   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 22:21:56.825426   80620 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 22:21:56.885114   80620 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 22:21:56.886787   80620 command_runner.go:130] ! W0223 22:21:56.690228    2212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:21:56.886832   80620 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 22:21:56.886942   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0223 22:21:56.886954   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.886965   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.886975   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.889503   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.889525   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.889536   80620 round_trippers.go:580]     Audit-Id: a9179ace-0f8b-41d7-acc9-15a5468f5431
	I0223 22:21:56.889545   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.889552   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.889561   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.889569   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.889582   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.890569   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29273 chars]
	I0223 22:21:56.891994   80620 kubeadm.go:784] kubelet initialised
	I0223 22:21:56.892020   80620 kubeadm.go:785] duration metric: took 5.174392ms waiting for restarted kubelet to initialise ...
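The restarted-kubelet check above selects only the static control-plane pods via the label selector tier=control-plane (URL-encoded as tier%3Dcontrol-plane in the request line). In client-go that is an ordinary list with LabelSelector set; a sketch under the same kubeconfig assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of GET .../pods?labelSelector=tier%3Dcontrol-plane
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}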
	I0223 22:21:56.892029   80620 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:21:56.892094   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:21:56.892105   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.892115   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.892126   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.898216   80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 22:21:56.898231   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.898240   80620 round_trippers.go:580]     Audit-Id: 0cbc9df8-5ddc-4405-a649-09747f9c7e5c
	I0223 22:21:56.898250   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.898260   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.898268   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.898280   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.898290   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.899125   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
	I0223 22:21:56.901600   80620 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.901668   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:21:56.901680   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.901690   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.901697   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.906528   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:21:56.906543   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.906552   80620 round_trippers.go:580]     Audit-Id: c55b1693-f442-4306-a674-87f938885743
	I0223 22:21:56.906561   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.906571   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.906580   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.906589   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.906602   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.906875   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:21:56.907276   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.907287   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.907294   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.907312   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.916593   80620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0223 22:21:56.916608   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.916616   80620 round_trippers.go:580]     Audit-Id: 3b9497a6-fa4c-472e-b004-b0b6906e7a7f
	I0223 22:21:56.916625   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.916634   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.916644   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.916652   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.916662   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.916802   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.917117   80620 pod_ready.go:97] node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.917132   80620 pod_ready.go:81] duration metric: took 15.512217ms waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.917139   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
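Each "waiting for pod ... to be Ready" block above follows the same pattern: fetch the pod, then fetch the node hosting it; if the node's Ready condition is False, bail out immediately with the "(skipping!)" error rather than spend the 4m0s budget on a pod that cannot become Ready. A client-go sketch of that short-circuit (function names are illustrative; the pod name comes from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether a node has Ready=True, mirroring the check that
// produces the "(skipping!)" lines in the log above.
func nodeReady(cs *kubernetes.Clientset, nodeName string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func checkPod(cs *kubernetes.Clientset, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ready, err := nodeReady(cs, pod.Spec.NodeName)
	if err != nil {
		return err
	}
	if !ready {
		return fmt.Errorf("node %q hosting pod %q is currently not \"Ready\" (skipping!)", pod.Spec.NodeName, name)
	}
	return nil // only now would the pod's own Ready condition be awaited
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := checkPod(cs, "kube-system", "coredns-787d4945fb-ktr7h"); err != nil {
		fmt.Println(err)
	}
}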
	I0223 22:21:56.917145   80620 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.917197   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
	I0223 22:21:56.917206   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.917213   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.917219   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.919079   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.919091   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.919097   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.919103   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.919108   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.919114   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.919120   80620 round_trippers.go:580]     Audit-Id: 143d00d2-5e6b-44b2-a517-c658e2dc5a9f
	I0223 22:21:56.919129   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.919346   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6289 chars]
	I0223 22:21:56.919779   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.919793   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.919802   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.919808   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.921391   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.921406   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.921413   80620 round_trippers.go:580]     Audit-Id: 9f5eac9e-078a-4143-9d6d-1b1de0a3102a
	I0223 22:21:56.921423   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.921431   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.921440   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.921450   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.921460   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.921618   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.921957   80620 pod_ready.go:97] node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.921972   80620 pod_ready.go:81] duration metric: took 4.821003ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.921981   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.921998   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.922055   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
	I0223 22:21:56.922065   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.922076   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.922089   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.925010   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.925024   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.925033   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.925043   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.925052   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.925061   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.925070   80620 round_trippers.go:580]     Audit-Id: 422d48f0-48d6-4c16-8b22-40f26357fc34
	I0223 22:21:56.925075   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.925261   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"282","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0223 22:21:56.925639   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:56.925652   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.925659   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.925666   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.927337   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:21:56.927356   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.927365   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.927373   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.927382   80620 round_trippers.go:580]     Audit-Id: 020b9a46-ef43-4607-90e4-5d3e9e7d1a08
	I0223 22:21:56.927392   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.927401   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.927413   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.927579   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:56.927921   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.927940   80620 pod_ready.go:81] duration metric: took 5.928725ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:56.927950   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:56.927957   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:56.928048   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
	I0223 22:21:56.928062   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:56.928072   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:56.928082   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:56.930936   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:56.930950   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:56.930956   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:56.930961   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:56.930968   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:56.930982   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:56 GMT
	I0223 22:21:56.930995   80620 round_trippers.go:580]     Audit-Id: 00aa01ac-5a84-4085-b3b5-f5f6d06fbe47
	I0223 22:21:56.931005   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:56.931218   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"739","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7424 chars]
	I0223 22:21:57.067070   80620 request.go:622] Waited for 135.338555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.067135   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.067145   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.067163   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.067176   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.070119   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.070137   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.070143   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.070149   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.070155   80620 round_trippers.go:580]     Audit-Id: 5d3402dd-3874-4131-9278-561b1ef77762
	I0223 22:21:57.070161   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.070167   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.070178   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.070297   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:57.070668   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.070691   80620 pod_ready.go:81] duration metric: took 142.727116ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:57.070704   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.070713   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.267166   80620 request.go:622] Waited for 196.388978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:21:57.267229   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:21:57.267239   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.267252   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.267264   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.269968   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.269991   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.270000   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.270012   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.270084   80620 round_trippers.go:580]     Audit-Id: 27049171-e30c-4ab9-a6ed-77da398a4856
	I0223 22:21:57.270104   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.270113   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.270123   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.270261   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0223 22:21:57.467146   80620 request.go:622] Waited for 196.375195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:21:57.467201   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:21:57.467207   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.467216   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.467235   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.469655   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.469680   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.469690   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.469716   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.469727   80620 round_trippers.go:580]     Audit-Id: d420f22f-77bb-4122-826c-40660cb2d6fb
	I0223 22:21:57.469734   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.469741   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.469749   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.469921   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0223 22:21:57.470230   80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
	I0223 22:21:57.470242   80620 pod_ready.go:81] duration metric: took 399.521519ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.470250   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:57.667697   80620 request.go:622] Waited for 197.385632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:21:57.667766   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:21:57.667771   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.667778   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.667785   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.670278   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.670298   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.670308   80620 round_trippers.go:580]     Audit-Id: 0128213a-339a-470c-989d-e7b486abebe1
	I0223 22:21:57.670316   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.670324   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.670333   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.670342   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.670351   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.670879   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"377","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 22:21:57.867695   80620 request.go:622] Waited for 196.388162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.867765   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:57.867770   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:57.867778   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:57.867784   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:57.870409   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:57.870431   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:57.870442   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:57.870452   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:57.870460   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:57.870466   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:57 GMT
	I0223 22:21:57.870474   80620 round_trippers.go:580]     Audit-Id: a53d6f4e-2730-4846-9147-87d2b5b1bc56
	I0223 22:21:57.870483   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:57.870627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:57.870935   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.870951   80620 pod_ready.go:81] duration metric: took 400.694245ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:57.870962   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:57.870970   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.067390   80620 request.go:622] Waited for 196.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:21:58.067527   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:21:58.067575   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.067593   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.067604   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.071162   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:21:58.071181   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.071191   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.071199   80620 round_trippers.go:580]     Audit-Id: 49f82db0-63aa-4950-9457-03eeb73d1c6f
	I0223 22:21:58.071207   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.071215   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.071223   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.071231   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.071517   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0223 22:21:58.267044   80620 request.go:622] Waited for 195.100843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:21:58.267131   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:21:58.267138   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.267150   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.267161   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.269786   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.269805   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.269812   80620 round_trippers.go:580]     Audit-Id: 28398178-6b4f-4ced-bd50-76b0a4e432c0
	I0223 22:21:58.269818   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.269823   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.269828   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.269833   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.269846   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.270022   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
	I0223 22:21:58.270353   80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
	I0223 22:21:58.270367   80620 pod_ready.go:81] duration metric: took 399.384993ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.270378   80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:21:58.467272   80620 request.go:622] Waited for 196.812846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:21:58.467358   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:21:58.467365   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.467376   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.467390   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.470141   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.470169   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.470179   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.470188   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.470195   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.470204   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.470213   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.470221   80620 round_trippers.go:580]     Audit-Id: e5044b8f-aa40-4729-93fe-c25c71ca551c
	I0223 22:21:58.470349   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"742","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5136 chars]
	I0223 22:21:58.667199   80620 request.go:622] Waited for 196.342723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.667264   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.667275   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.667288   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.667318   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.669825   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.669849   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.669860   80620 round_trippers.go:580]     Audit-Id: 8c1fc862-a3d1-4b08-b8c2-f41fa6fd3cd6
	I0223 22:21:58.669869   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.669877   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.669885   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.669899   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.669910   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.670129   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:58.670496   80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:58.670517   80620 pod_ready.go:81] duration metric: took 400.130245ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	E0223 22:21:58.670528   80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
	I0223 22:21:58.670539   80620 pod_ready.go:38] duration metric: took 1.778499138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
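
The pod_ready.go lines above implement a per-pod readiness check: a pod only counts once its PodReady condition is True, and the wait is cut short with the "(skipping!)" error whenever the node hosting the pod reports Ready=False, since no pod can become Ready on an unready node. A minimal sketch of that check, assuming client-go; the package and helper names here are hypothetical, not minikube's actual pod_ready.go:

    package readiness

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's PodReady condition is True, but
    // fails fast when the hosting node is not Ready, mirroring the
    // "(skipping!)" errors in the log above.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
            }
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
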
	I0223 22:21:58.670563   80620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 22:21:58.684600   80620 command_runner.go:130] > -16
	I0223 22:21:58.684633   80620 ops.go:34] apiserver oom_adj: -16
	I0223 22:21:58.684642   80620 kubeadm.go:637] restartCluster took 21.880365731s
	I0223 22:21:58.684651   80620 kubeadm.go:403] StartCluster complete in 21.912911073s
	I0223 22:21:58.684672   80620 settings.go:142] acquiring lock: {Name:mk906211444ec0c60982da29f94c92fb57d72ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:58.684774   80620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:58.685563   80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:21:58.685892   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 22:21:58.686005   80620 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0223 22:21:58.686136   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:21:58.686171   80620 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:21:58.687964   80620 out.go:177] * Enabled addons: 
	I0223 22:21:58.686508   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:21:58.689318   80620 addons.go:492] enable addons completed in 3.316295ms: enabled=[]
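
The recurring "Waited for ~196ms due to client-side throttling, not priority and fairness" lines earlier in this log come from client-go's client-side rate limiter, not from the API server. In the rest.Config dumped above, QPS:0 and Burst:0 mean client-go falls back to its defaults (5 requests per second, burst of 10), which is what spaces out the bursty polling GETs. A minimal sketch, assuming client-go; the values and function name are illustrative, not minikube's:

    package throttle

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient raises the client-side rate limits so a burst of polling
    // GETs (like the ones above) is not spaced out by the default limiter
    // (5 QPS, burst 10, applied whenever QPS/Burst are left at zero).
    func newClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50 // illustrative values only
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
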
	I0223 22:21:58.689636   80620 round_trippers.go:463] GET https://192.168.39.240:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:21:58.689653   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.689665   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.689674   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.692405   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.692425   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.692435   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.692448   80620 round_trippers.go:580]     Audit-Id: 2916b551-1504-4ee6-8f0b-8bb9b49c72fe
	I0223 22:21:58.692457   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.692474   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.692486   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.692499   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.692512   80620 round_trippers.go:580]     Content-Length: 291
	I0223 22:21:58.692541   80620 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88095e59-4c47-4f2e-9af0-397e7cc508de","resourceVersion":"743","creationTimestamp":"2023-02-23T22:17:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:21:58.692706   80620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773885" context rescaled to 1 replicas
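
The GET of the coredns Scale subresource just above is the read half of the rescale reported here: the deployment's replica count is fetched through autoscaling/v1 and would be rewritten only if it differed. A minimal sketch of such a read-then-write, assuming client-go; not minikube's kapi.go itself:

    package rescale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads kube-system/coredns via its Scale subresource
    // (the GET .../deployments/coredns/scale above) and writes the desired
    // replica count back only when it differs.
    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired count, as in this run
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
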
	I0223 22:21:58.692739   80620 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:21:58.694468   80620 out.go:177] * Verifying Kubernetes components...
	I0223 22:21:58.696081   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:21:58.815357   80620 command_runner.go:130] > apiVersion: v1
	I0223 22:21:58.815388   80620 command_runner.go:130] > data:
	I0223 22:21:58.815395   80620 command_runner.go:130] >   Corefile: |
	I0223 22:21:58.815401   80620 command_runner.go:130] >     .:53 {
	I0223 22:21:58.815406   80620 command_runner.go:130] >         log
	I0223 22:21:58.815414   80620 command_runner.go:130] >         errors
	I0223 22:21:58.815423   80620 command_runner.go:130] >         health {
	I0223 22:21:58.815430   80620 command_runner.go:130] >            lameduck 5s
	I0223 22:21:58.815435   80620 command_runner.go:130] >         }
	I0223 22:21:58.815443   80620 command_runner.go:130] >         ready
	I0223 22:21:58.815455   80620 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 22:21:58.815461   80620 command_runner.go:130] >            pods insecure
	I0223 22:21:58.815470   80620 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 22:21:58.815479   80620 command_runner.go:130] >            ttl 30
	I0223 22:21:58.815485   80620 command_runner.go:130] >         }
	I0223 22:21:58.815495   80620 command_runner.go:130] >         prometheus :9153
	I0223 22:21:58.815501   80620 command_runner.go:130] >         hosts {
	I0223 22:21:58.815510   80620 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0223 22:21:58.815517   80620 command_runner.go:130] >            fallthrough
	I0223 22:21:58.815526   80620 command_runner.go:130] >         }
	I0223 22:21:58.815537   80620 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 22:21:58.815545   80620 command_runner.go:130] >            max_concurrent 1000
	I0223 22:21:58.815553   80620 command_runner.go:130] >         }
	I0223 22:21:58.815563   80620 command_runner.go:130] >         cache 30
	I0223 22:21:58.815574   80620 command_runner.go:130] >         loop
	I0223 22:21:58.815583   80620 command_runner.go:130] >         reload
	I0223 22:21:58.815595   80620 command_runner.go:130] >         loadbalance
	I0223 22:21:58.815605   80620 command_runner.go:130] >     }
	I0223 22:21:58.815614   80620 command_runner.go:130] > kind: ConfigMap
	I0223 22:21:58.815623   80620 command_runner.go:130] > metadata:
	I0223 22:21:58.815631   80620 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:17:37Z"
	I0223 22:21:58.815641   80620 command_runner.go:130] >   name: coredns
	I0223 22:21:58.815651   80620 command_runner.go:130] >   namespace: kube-system
	I0223 22:21:58.815660   80620 command_runner.go:130] >   resourceVersion: "360"
	I0223 22:21:58.815671   80620 command_runner.go:130] >   uid: 79632023-f720-4e05-a063-411c24789887
	I0223 22:21:58.818640   80620 node_ready.go:35] waiting up to 6m0s for node "multinode-773885" to be "Ready" ...
	I0223 22:21:58.818784   80620 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
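
The Corefile dumped above already carries the hosts stanza mapping 192.168.39.1 to host.minikube.internal, which is why start.go:894 skips rewriting it. A minimal sketch of such a presence check, assuming client-go; the package and helper names are hypothetical:

    package coredns

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasHostRecord fetches the kube-system/coredns ConfigMap (the object
    // printed above) and reports whether its Corefile already contains the
    // given host record, e.g. "host.minikube.internal".
    func hasHostRecord(ctx context.Context, cs *kubernetes.Clientset, host string) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], host), nil
    }
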
	I0223 22:21:58.866997   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:58.867022   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:58.867036   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:58.867046   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:58.869514   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:58.869542   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:58.869553   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:58.869562   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:58.869568   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:58.869573   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:58 GMT
	I0223 22:21:58.869579   80620 round_trippers.go:580]     Audit-Id: ef8ca951-03a3-4673-b3b0-d6e949e3aba1
	I0223 22:21:58.869586   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:58.869696   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:59.370801   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:59.370828   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:59.370840   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:59.370850   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:59.373237   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:59.373263   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:59.373275   80620 round_trippers.go:580]     Audit-Id: cc5c5f53-65a1-48f1-8d30-2983a96a1517
	I0223 22:21:59.373284   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:59.373292   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:59.373301   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:59.373310   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:59.373320   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:59 GMT
	I0223 22:21:59.373432   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:21:59.871104   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:21:59.871130   80620 round_trippers.go:469] Request Headers:
	I0223 22:21:59.871142   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:21:59.871152   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:21:59.873824   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:21:59.873849   80620 round_trippers.go:577] Response Headers:
	I0223 22:21:59.873860   80620 round_trippers.go:580]     Audit-Id: a0c12052-13ba-4532-b2cb-ef0712468e2c
	I0223 22:21:59.873868   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:21:59.873877   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:21:59.873890   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:21:59.873898   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:21:59.873910   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:21:59 GMT
	I0223 22:21:59.874344   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.371108   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:00.371138   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:00.371150   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:00.371160   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:00.373796   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:00.373818   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:00.373826   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:00.373832   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:00.373837   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:00.373843   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:00 GMT
	I0223 22:22:00.373849   80620 round_trippers.go:580]     Audit-Id: 6d76f1af-c5ab-44d4-ac95-d4a732c54af0
	I0223 22:22:00.373861   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:00.374155   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.870897   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:00.870933   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:00.870942   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:00.870951   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:00.873427   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:00.873451   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:00.873462   80620 round_trippers.go:580]     Audit-Id: 494f6db1-2d29-4a14-be25-f5115f464c6c
	I0223 22:22:00.873471   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:00.873485   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:00.873495   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:00.873504   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:00.873512   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:00 GMT
	I0223 22:22:00.873654   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:00.874130   80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
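
The repeated GET /api/v1/nodes/multinode-773885 requests above are node_ready.go polling the Node object roughly every 500ms, up to the 6m0s budget, until its NodeReady condition turns True. A minimal sketch of that loop using client-go's wait helper; not node_ready.go itself:

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the Node every 500ms, mirroring the request
    // cadence above, until NodeReady is True or the timeout (6m0s in this
    // run) expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient errors as "not ready yet"
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
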
	I0223 22:22:01.370246   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:01.370268   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:01.370279   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:01.370286   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:01.372742   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:01.372768   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:01.372779   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:01.372787   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:01.372796   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:01.372808   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:01 GMT
	I0223 22:22:01.372816   80620 round_trippers.go:580]     Audit-Id: d657d94b-1177-4e47-9c6a-10517add9c29
	I0223 22:22:01.372827   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:01.372974   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:01.870635   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:01.870664   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:01.870672   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:01.870679   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:01.873350   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:01.873373   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:01.873386   80620 round_trippers.go:580]     Audit-Id: 3aae1eee-a094-424f-bbd3-1cc775206a05
	I0223 22:22:01.873395   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:01.873403   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:01.873410   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:01.873419   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:01.873428   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:01 GMT
	I0223 22:22:01.873701   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.370356   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:02.370378   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:02.370386   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:02.370392   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:02.373961   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:02.373983   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:02.373992   80620 round_trippers.go:580]     Audit-Id: 2d8ae255-30e7-495f-82a8-f977058510be
	I0223 22:22:02.374000   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:02.374008   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:02.374018   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:02.374028   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:02.374041   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:02 GMT
	I0223 22:22:02.374362   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.871107   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:02.871133   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:02.871148   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:02.871157   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:02.873653   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:02.873672   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:02.873680   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:02.873686   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:02.873691   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:02 GMT
	I0223 22:22:02.873697   80620 round_trippers.go:580]     Audit-Id: 88e3a2a0-3a44-456c-a122-9443f9691153
	I0223 22:22:02.873706   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:02.873715   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:02.874022   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:02.874437   80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
	I0223 22:22:03.370842   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:03.370869   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:03.370886   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:03.370894   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:03.372889   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:03.372909   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:03.372916   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:03.372922   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:03 GMT
	I0223 22:22:03.372928   80620 round_trippers.go:580]     Audit-Id: 553e23aa-d7b4-4f46-b968-491b3c19b7a9
	I0223 22:22:03.372934   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:03.372942   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:03.372954   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:03.373055   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:03.870742   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:03.870764   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:03.870773   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:03.870779   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:03.873449   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:03.873469   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:03.873476   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:03.873482   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:03.873487   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:03 GMT
	I0223 22:22:03.873493   80620 round_trippers.go:580]     Audit-Id: d10ccbbb-11df-43ab-9526-c648f4eb57ab
	I0223 22:22:03.873499   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:03.873504   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:03.873699   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:04.370303   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.370324   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.370332   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.370339   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.372813   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.372839   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.372851   80620 round_trippers.go:580]     Audit-Id: bdad9e22-9644-4e1c-8f6c-ae6fc5d4caf1
	I0223 22:22:04.372861   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.372870   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.372879   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.372893   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.372902   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.373649   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
	I0223 22:22:04.870293   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.870319   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.870327   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.870333   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.873111   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.873137   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.873148   80620 round_trippers.go:580]     Audit-Id: 356034ea-3c99-4375-a746-070c2cc9db4c
	I0223 22:22:04.873157   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.873164   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.873172   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.873182   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.873192   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.873417   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:04.873740   80620 node_ready.go:49] node "multinode-773885" has status "Ready":"True"
	I0223 22:22:04.873759   80620 node_ready.go:38] duration metric: took 6.055088164s waiting for node "multinode-773885" to be "Ready" ...
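The cycle above, a GET to /api/v1/nodes/multinode-773885 roughly every 500ms until the node object (resourceVersion 785) finally reports "Ready":"True", is minikube's node_ready wait. As a reading aid, a minimal client-go sketch of the same check follows; the function name nodeIsReady, the kubeconfig path, and the 500ms/6m budget are assumptions inferred from this log, not minikube's actual code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node's NodeReady condition is True,
    // which is what the "Ready":"True" in the response body above encodes.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Sketch: load ~/.kube/config and poll at the ~500ms cadence visible above.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            return nodeIsReady(context.Background(), cs, "multinode-773885")
        })
        fmt.Println("node ready:", err == nil)
    }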
	I0223 22:22:04.873768   80620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:22:04.873821   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:04.873828   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.873836   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.873842   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.877171   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:04.877190   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.877199   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.877209   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.877217   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.877225   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.877234   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.877242   80620 round_trippers.go:580]     Audit-Id: ea2e3ce7-5ec8-4de8-affe-00217b9f0f75
	I0223 22:22:04.878185   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"788"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83657 chars]
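Before waiting on individual pods, minikube fetches the whole kube-system PodList (the GET just above carries no label selector) and filters it against the system-critical labels listed at pod_ready.go:35. Extending the sketch above (same imports), one illustrative way to do the same:

    // listSystemPods fetches every pod in kube-system, mirroring the unfiltered
    // GET /api/v1/namespaces/kube-system/pods above; callers then filter the
    // result by the labels they care about (e.g. k8s-app=kube-dns). Sketch only.
    func listSystemPods(ctx context.Context, cs kubernetes.Interface) (*corev1.PodList, error) {
        return cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    }

A LabelSelector in metav1.ListOptions would narrow the list server-side instead; the log shows the unfiltered form.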
	I0223 22:22:04.880661   80620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:04.880721   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:04.880729   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.880736   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.880743   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.882620   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:04.882637   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.882643   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.882649   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.882654   80620 round_trippers.go:580]     Audit-Id: b8c34b52-e089-4d20-abac-792cd26a154e
	I0223 22:22:04.882660   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.882665   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.882671   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.882780   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:04.883130   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:04.883141   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:04.883148   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:04.883154   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:04.885545   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:04.885559   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:04.885566   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:04.885571   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:04.885577   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:04.885582   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:04 GMT
	I0223 22:22:04.885590   80620 round_trippers.go:580]     Audit-Id: a935859f-b8a0-4ddc-8ffe-b88f374b4617
	I0223 22:22:04.885597   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:04.885668   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:05.386735   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:05.386762   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.386775   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.386785   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.389024   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:05.389044   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.389055   80620 round_trippers.go:580]     Audit-Id: 5162732a-6a2d-4976-bd1a-d7a30dbd6874
	I0223 22:22:05.389063   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.389070   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.389082   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.389095   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.389103   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.389223   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:05.389693   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:05.389706   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.389713   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.389722   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.391445   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:05.391462   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.391469   80620 round_trippers.go:580]     Audit-Id: 152ffe10-665f-45a2-8a81-8746544ba57e
	I0223 22:22:05.391475   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.391482   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.391491   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.391501   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.391511   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.391627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:05.886225   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:05.886248   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.886257   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.886264   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.888353   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:05.888389   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.888399   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.888408   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.888417   80620 round_trippers.go:580]     Audit-Id: cc5f0143-2508-446f-907a-56ab533f7430
	I0223 22:22:05.888426   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.888438   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.888446   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.889024   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:05.889458   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:05.889469   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:05.889476   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:05.889484   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:05.891242   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:05.891257   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:05.891263   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:05.891269   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:05.891275   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:05.891283   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:05.891293   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:05 GMT
	I0223 22:22:05.891319   80620 round_trippers.go:580]     Audit-Id: ee3b00fc-914b-4eba-8a45-e4597d8f6d25
	I0223 22:22:05.891627   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.386281   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:06.386303   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.386311   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.386326   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.388974   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:06.388992   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.388999   80620 round_trippers.go:580]     Audit-Id: 220c9abc-71ea-4bf1-984a-8b6e023377f1
	I0223 22:22:06.389014   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.389026   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.389038   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.389046   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.389052   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.389842   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:06.390308   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:06.390321   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.390328   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.390337   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.391935   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:06.391953   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.391962   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.391970   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.391980   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.391989   80620 round_trippers.go:580]     Audit-Id: 7685b789-c707-4d17-88af-7145585bce78
	I0223 22:22:06.391998   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.392010   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.392362   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.886127   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:06.886150   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.886159   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.886165   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.889975   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:06.890001   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.890013   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.890023   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.890035   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.890048   80620 round_trippers.go:580]     Audit-Id: 87848966-24d5-45b3-a7aa-56f65410f508
	I0223 22:22:06.890057   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.890070   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.890267   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:06.890721   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:06.890734   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:06.890741   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:06.890747   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:06.895655   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:06.895674   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:06.895684   80620 round_trippers.go:580]     Audit-Id: f054bb7d-1199-4b8d-b3f0-4c0274f1d63d
	I0223 22:22:06.895693   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:06.895702   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:06.895713   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:06.895724   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:06.895736   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:06 GMT
	I0223 22:22:06.896139   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:06.896420   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
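This is the first periodic status line: the pod object is still at resourceVersion 745 with "Ready":"False", so the polling continues. The overall shape of the wait, assuming the same helpers as above (the 500ms interval is read off the timestamps, and the error handling here is a guess, not minikube's):

    // waitForPodReady polls podIsReady until it succeeds or the timeout elapses,
    // surfacing the intermediate status much like pod_ready.go:102 above.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            ready, err := podIsReady(ctx, cs, ns, name)
            if err != nil {
                return false, nil // treat API hiccups as "not ready yet" and retry
            }
            if !ready {
                fmt.Printf("pod %q in %q namespace has status Ready: False\n", name, ns)
            }
            return ready, nil
        })
    }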
	I0223 22:22:07.386841   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:07.386862   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.386871   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.386878   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.389998   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:07.390025   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.390036   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.390046   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.390054   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.390062   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.390070   80620 round_trippers.go:580]     Audit-Id: d6b7ea92-112f-499d-a61b-86d8245e8558
	I0223 22:22:07.390078   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.390244   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:07.390679   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:07.390690   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.390698   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.390704   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.392927   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:07.392948   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.392958   80620 round_trippers.go:580]     Audit-Id: e7498617-1172-42fd-b07a-d2d628e52a21
	I0223 22:22:07.392969   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.392988   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.393002   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.393011   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.393022   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.393607   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:07.886231   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:07.886254   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.886277   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.886284   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.889328   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:07.889351   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.889359   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.889366   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.889371   80620 round_trippers.go:580]     Audit-Id: 996a8d26-ab61-4eb1-a206-c0fb32514e06
	I0223 22:22:07.889377   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.889382   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.889388   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.889970   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:07.890413   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:07.890425   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:07.890432   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:07.890439   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:07.897920   80620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0223 22:22:07.897934   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:07.897941   80620 round_trippers.go:580]     Audit-Id: 4221b7db-ff10-4443-aed5-78c6f7b9296c
	I0223 22:22:07.897947   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:07.897953   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:07.897958   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:07.897966   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:07.897972   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:07 GMT
	I0223 22:22:07.898379   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:08.386191   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:08.386213   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.386224   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.386234   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.388618   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.388637   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.388644   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.388652   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.388660   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.388668   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.388689   80620 round_trippers.go:580]     Audit-Id: 9fd3f354-aaea-4470-b0a9-a62bb9cf4b81
	I0223 22:22:08.388695   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.389016   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:08.389462   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:08.389474   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.389484   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.389493   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.391347   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:08.391366   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.391376   80620 round_trippers.go:580]     Audit-Id: d2b922bc-cc07-4d6a-a919-5b81247f7675
	I0223 22:22:08.391385   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.391396   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.391405   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.391414   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.391419   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.391692   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:08.886358   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:08.886387   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.886397   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.886403   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.889174   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.889200   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.889209   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.889215   80620 round_trippers.go:580]     Audit-Id: 7d35bf13-e46b-4b70-b379-eef2287d1352
	I0223 22:22:08.889220   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.889226   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.889231   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.889236   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.889437   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:08.889910   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:08.889923   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:08.889931   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:08.889937   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:08.892893   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:08.892908   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:08.892914   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:08 GMT
	I0223 22:22:08.892919   80620 round_trippers.go:580]     Audit-Id: c156c99d-e130-4f55-b4e3-14616a7ba70f
	I0223 22:22:08.892927   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:08.892936   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:08.892945   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:08.892956   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:08.893597   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:09.386240   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:09.386263   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.386272   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.386278   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.388959   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.388983   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.388991   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.388997   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.389002   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.389007   80620 round_trippers.go:580]     Audit-Id: b1b9610c-e081-4bbb-837e-8be581f68475
	I0223 22:22:09.389013   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.389018   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.389296   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:09.389849   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:09.389877   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.389888   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.389895   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.391871   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:09.391888   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.391895   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.391900   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.391906   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.391911   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.391916   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.391930   80620 round_trippers.go:580]     Audit-Id: 002294de-1a26-4570-886e-0a7800195800
	I0223 22:22:09.392074   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:09.392445   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
	I0223 22:22:09.886775   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:09.886796   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.886805   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.886812   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.889680   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.889703   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.889710   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.889716   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.889722   80620 round_trippers.go:580]     Audit-Id: 3a94f330-f28f-46c4-a648-51998b06aed1
	I0223 22:22:09.889730   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.889740   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.889749   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.889960   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:09.890412   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:09.890426   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:09.890433   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:09.890439   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:09.893112   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:09.893124   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:09.893131   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:09 GMT
	I0223 22:22:09.893136   80620 round_trippers.go:580]     Audit-Id: f1b19073-36ac-4a4c-b6c5-aa4b69ec1776
	I0223 22:22:09.893141   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:09.893148   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:09.893156   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:09.893165   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:09.893436   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:10.386076   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:10.386100   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.386109   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.386115   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.388462   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:10.388484   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.388491   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.388497   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.388502   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.388508   80620 round_trippers.go:580]     Audit-Id: b0c0f970-513c-4958-8f0f-9012dbfa36d5
	I0223 22:22:10.388513   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.388518   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.388755   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0223 22:22:10.389295   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:10.389312   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.389323   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.389333   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.391529   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:10.391550   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.391560   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.391568   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.391574   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.391582   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.391587   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.391593   80620 round_trippers.go:580]     Audit-Id: 10261026-5803-485c-834a-bf21f0cb79e3
	I0223 22:22:10.391676   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:10.886276   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:10.886298   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.886310   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.886319   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.890190   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:10.890215   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.890222   80620 round_trippers.go:580]     Audit-Id: b6386ff9-de93-4709-b3ef-d903d0d5a9cc
	I0223 22:22:10.890228   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.890234   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.890239   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.890245   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.890251   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.890402   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0223 22:22:10.890869   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:10.890883   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:10.890893   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:10.890902   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:10.895016   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:10.895035   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:10.895046   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:10.895055   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:10.895064   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:10.895073   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:10 GMT
	I0223 22:22:10.895080   80620 round_trippers.go:580]     Audit-Id: 2e664d84-586c-4ab6-94bc-ba77835a654d
	I0223 22:22:10.895085   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:10.895436   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.386154   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:11.386182   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.386193   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.386202   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.388774   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.388795   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.388805   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.388814   80620 round_trippers.go:580]     Audit-Id: 0b53d934-8f77-4a2f-bbe6-92be4d3d5c17
	I0223 22:22:11.388822   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.388831   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.388848   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.388858   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.389048   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0223 22:22:11.389509   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.389522   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.389532   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.389541   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.391436   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:11.391458   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.391475   80620 round_trippers.go:580]     Audit-Id: f0d5469c-1828-43e0-99ac-880d59c5ca18
	I0223 22:22:11.391486   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.391496   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.391502   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.391508   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.391514   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.392144   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.392489   80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
	I0223 22:22:11.886705   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
	I0223 22:22:11.886728   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.886740   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.886747   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.897949   80620 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0223 22:22:11.897972   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.897979   80620 round_trippers.go:580]     Audit-Id: ee3fad82-cb14-466d-be80-d787cdfe18c6
	I0223 22:22:11.897988   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.897996   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.898005   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.898014   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.898023   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.898203   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0223 22:22:11.898695   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.898709   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.898716   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.898722   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.901522   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.901537   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.901546   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.901555   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.901565   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.901574   80620 round_trippers.go:580]     Audit-Id: 67ab3f98-4824-4d37-9baa-d6fde6241cd3
	I0223 22:22:11.901583   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.901592   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.901884   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.902261   80620 pod_ready.go:92] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.902281   80620 pod_ready.go:81] duration metric: took 7.021599209s waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
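	The exchanges above show the readiness loop that pod_ready.go drives: roughly every 500ms it GETs the pod, inspects the Ready condition in its status, then GETs the owning node, until the pod reports Ready or the 6m0s budget runs out. A minimal sketch of the same pattern in Go using client-go — an illustration only, not minikube's actual pod_ready.go; waitPodReady and the fixed 500ms interval are this sketch's choices:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodReady polls the pod until its Ready condition is True or the
	    // timeout expires, mirroring the GET-pod / check-status loop in the log.
	    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return err
	            }
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
	                    return nil // the log's `status "Ready":"True"` case
	                }
	            }
	            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	        }
	        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	    }

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(config)
	        fmt.Println(waitPodReady(cs, "kube-system", "coredns-787d4945fb-ktr7h", 6*time.Minute))
	    }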
	I0223 22:22:11.902292   80620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.902345   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
	I0223 22:22:11.902362   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.902374   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.902387   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.905539   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:11.905555   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.905564   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.905573   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.905584   80620 round_trippers.go:580]     Audit-Id: b11ef536-b4c5-482e-aa7c-76d59636d5d2
	I0223 22:22:11.905592   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.905600   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.905608   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.906366   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"802","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6065 chars]
	I0223 22:22:11.906856   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.906876   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.906892   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.906903   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.908814   80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:22:11.908827   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.908833   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.908838   80620 round_trippers.go:580]     Audit-Id: afa24933-99a3-4732-ab8c-89f796285545
	I0223 22:22:11.908844   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.908849   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.908860   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.908868   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.909140   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.909495   80620 pod_ready.go:92] pod "etcd-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.909509   80620 pod_ready.go:81] duration metric: took 7.209083ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.909528   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.909582   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
	I0223 22:22:11.909592   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.909603   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.909616   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.911700   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.911720   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.911729   80620 round_trippers.go:580]     Audit-Id: 779ea438-bd06-40b6-ba45-805cc766e96d
	I0223 22:22:11.911737   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.911745   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.911754   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.911762   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.911772   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.911987   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"793","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7599 chars]
	I0223 22:22:11.912445   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.912459   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.912475   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.912485   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.914590   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.914610   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.914619   80620 round_trippers.go:580]     Audit-Id: 05b9d526-86d7-43a1-a29b-8b19eb1394d1
	I0223 22:22:11.914628   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.914637   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.914659   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.914670   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.914685   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.914841   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.915184   80620 pod_ready.go:92] pod "kube-apiserver-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.915198   80620 pod_ready.go:81] duration metric: took 5.656927ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.915207   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.915261   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
	I0223 22:22:11.915271   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.915282   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.915294   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.917370   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.917390   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.917400   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.917407   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.917416   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.917424   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.917434   80620 round_trippers.go:580]     Audit-Id: 1c6ec0cd-a712-46c0-9127-fc5aaaf54dca
	I0223 22:22:11.917444   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.917666   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"825","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7162 chars]
	I0223 22:22:11.918056   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:11.918067   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.918078   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.918090   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.920329   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.920349   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.920359   80620 round_trippers.go:580]     Audit-Id: 4abce7c0-9628-4d94-8005-2a2dfc23a6e7
	I0223 22:22:11.920367   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.920377   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.920386   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.920394   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.920410   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.921292   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:11.921655   80620 pod_ready.go:92] pod "kube-controller-manager-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.921672   80620 pod_ready.go:81] duration metric: took 6.456858ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.921682   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.921744   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
	I0223 22:22:11.921759   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.921770   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.921788   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.923979   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.923999   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.924008   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.924016   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.924024   80620 round_trippers.go:580]     Audit-Id: 0efbb785-cf58-48c7-81ba-79e7df1fffe6
	I0223 22:22:11.924037   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.924045   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.924054   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.924324   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0223 22:22:11.924642   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
	I0223 22:22:11.924651   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:11.924659   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:11.924668   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:11.927145   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:11.927164   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:11.927174   80620 round_trippers.go:580]     Audit-Id: d525fadc-555c-4d29-8ba1-8f98e144287a
	I0223 22:22:11.927190   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:11.927201   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:11.927209   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:11.927221   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:11.927230   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:11 GMT
	I0223 22:22:11.927662   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0223 22:22:11.927907   80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:11.927917   80620 pod_ready.go:81] duration metric: took 6.229355ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:11.927924   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.087372   80620 request.go:622] Waited for 159.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:22:12.087472   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
	I0223 22:22:12.087484   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.087494   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.087506   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.090953   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.090975   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.090982   80620 round_trippers.go:580]     Audit-Id: d476c971-82f9-4e13-bf24-ac1d0a7e0132
	I0223 22:22:12.090988   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.091000   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.091015   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.091023   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.091034   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.091257   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"751","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I0223 22:22:12.287106   80620 request.go:622] Waited for 195.345935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:12.287171   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:12.287176   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.287184   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.287190   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.290450   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.290482   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.290493   80620 round_trippers.go:580]     Audit-Id: 293be0f3-4481-47c8-8397-f5bcd5d19b91
	I0223 22:22:12.290503   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.290511   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.290527   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.290541   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.290550   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.290685   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:12.290991   80620 pod_ready.go:92] pod "kube-proxy-mdjks" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:12.291002   80620 pod_ready.go:81] duration metric: took 363.073923ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
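	The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter (the QPS and Burst fields on rest.Config, which default to 5 and 10), not from the server-side API Priority and Fairness feature whose FlowSchema/PriorityLevel UIDs appear in the response headers above. A sketch of where those knobs live; the raised values are illustrative, not minikube's settings:

	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/rest"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Load kubeconfig the same way kubectl does.
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        // client-go's default limiter allows ~5 sustained requests/sec with
	        // bursts up to 10; the ~160-200ms waits logged above are the client
	        // pacing itself under that budget. Raising the limits removes the
	        // waits at the cost of more apiserver load.
	        config.QPS = 50
	        config.Burst = 100
	        fmt.Printf("rate limit: %v QPS, burst %v\n", config.QPS, config.Burst)
	    }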
	I0223 22:22:12.291011   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.487380   80620 request.go:622] Waited for 196.297867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:22:12.487451   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
	I0223 22:22:12.487455   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.487463   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.487470   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.490351   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:12.490369   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.490376   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.490382   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.490390   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.490396   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.490402   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.490408   80620 round_trippers.go:580]     Audit-Id: 3101849d-f3a0-4ede-99b6-2a380cea5ba6
	I0223 22:22:12.490636   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0223 22:22:12.687374   80620 request.go:622] Waited for 196.32053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:22:12.687452   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
	I0223 22:22:12.687458   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.687466   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.687472   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.690923   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:12.690945   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.690952   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.690958   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.690963   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.690969   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.690975   80620 round_trippers.go:580]     Audit-Id: f8604e33-edeb-42ae-8e19-5e27a6bd8d7d
	I0223 22:22:12.690980   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.693472   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
	I0223 22:22:12.693842   80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:12.693857   80620 pod_ready.go:81] duration metric: took 402.838971ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.693868   80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:12.886856   80620 request.go:622] Waited for 192.90851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:22:12.886917   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
	I0223 22:22:12.886932   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:12.886943   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:12.886952   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:12.893080   80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 22:22:12.893102   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:12.893109   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:12 GMT
	I0223 22:22:12.893115   80620 round_trippers.go:580]     Audit-Id: 854e2fd9-4c25-4b2f-bc59-61d21fabfb74
	I0223 22:22:12.893120   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:12.893125   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:12.893131   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:12.893136   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:12.893332   80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"786","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4892 chars]
	I0223 22:22:13.087065   80620 request.go:622] Waited for 193.332526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:13.087127   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
	I0223 22:22:13.087133   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.087143   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.087153   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.091144   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:13.091162   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.091169   80620 round_trippers.go:580]     Audit-Id: bf568af1-d7fc-4da0-9559-42a27fc0cef3
	I0223 22:22:13.091175   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.091181   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.091186   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.091198   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.091210   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.091630   80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
	I0223 22:22:13.091948   80620 pod_ready.go:92] pod "kube-scheduler-multinode-773885" in "kube-system" namespace has status "Ready":"True"
	I0223 22:22:13.091980   80620 pod_ready.go:81] duration metric: took 398.085634ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
	I0223 22:22:13.091998   80620 pod_ready.go:38] duration metric: took 8.218220101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:22:13.092020   80620 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:22:13.092066   80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:22:13.104775   80620 command_runner.go:130] > 1675
	I0223 22:22:13.104818   80620 api_server.go:71] duration metric: took 14.412044719s to wait for apiserver process to appear ...
	I0223 22:22:13.104835   80620 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:22:13.104847   80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:22:13.110111   80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0223 22:22:13.110176   80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0223 22:22:13.110187   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.110206   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.110217   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.110872   80620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0223 22:22:13.110888   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.110895   80620 round_trippers.go:580]     Audit-Id: 4f7ff6ce-bed0-47c2-918d-6dd15db9ce31
	I0223 22:22:13.110901   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.110906   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.110911   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.110918   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.110923   80620 round_trippers.go:580]     Content-Length: 263
	I0223 22:22:13.110930   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.110950   80620 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:22:13.111007   80620 api_server.go:140] control plane version: v1.26.1
	I0223 22:22:13.111018   80620 api_server.go:130] duration metric: took 6.177354ms to wait for apiserver health ...
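[editor's note] The two calls above are the whole apiserver health gate: GET /healthz must return the literal body "ok", then GET /version is decoded to report the control-plane version. A minimal stdlib-only sketch of the same probe follows; the URL is this run's apiserver address, and InsecureSkipVerify stands in for minikube's kubeconfig-based TLS setup, so treat both as illustrative assumptions.

// healthprobe.go: probe an apiserver the way the log above does.
// A sketch, not minikube's code.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	base := "https://192.168.39.240:8443" // apiserver address from this run
	client := &http.Client{Transport: &http.Transport{
		// Demo only: skips verification of the cluster's self-signed CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	resp, err := client.Get(base + "/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = client.Get(base + "/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		Major, Minor, GitVersion string // matches the JSON fields logged above
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
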
	I0223 22:22:13.111024   80620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:22:13.287730   80620 request.go:622] Waited for 176.607463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.287780   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.287784   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.287794   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.287804   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.292061   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:13.292080   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.292087   80620 round_trippers.go:580]     Audit-Id: 8f903081-07eb-4386-b54e-2c988265836f
	I0223 22:22:13.292096   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.292104   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.292110   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.292116   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.292121   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.294183   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
	I0223 22:22:13.296686   80620 system_pods.go:59] 12 kube-system pods found
	I0223 22:22:13.296706   80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:22:13.296711   80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
	I0223 22:22:13.296715   80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:22:13.296719   80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:22:13.296723   80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:22:13.296727   80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:22:13.296731   80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
	I0223 22:22:13.296737   80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:22:13.296741   80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:22:13.296745   80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:22:13.296750   80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
	I0223 22:22:13.296754   80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:22:13.296759   80620 system_pods.go:74] duration metric: took 185.729884ms to wait for pod list to return data ...
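[editor's note] The round-tripper output above boils down to one PodList call against kube-system, followed by a phase check per pod. With client-go the same check is a few lines; the kubeconfig path below is an assumed placeholder, not the path this job uses.

// listpods.go: reproduce the "waiting for kube-system pods" check with
// client-go. Sketch only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %q Running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
	}
}
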
	I0223 22:22:13.296768   80620 default_sa.go:34] waiting for default service account to be created ...
	I0223 22:22:13.487059   80620 request.go:622] Waited for 190.213748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:22:13.487142   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:22:13.487151   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.487163   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.487179   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.490660   80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:22:13.490686   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.490698   80620 round_trippers.go:580]     Content-Length: 261
	I0223 22:22:13.490707   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.490715   80620 round_trippers.go:580]     Audit-Id: b33f914f-7659-4fc8-8f76-26f7e677ba77
	I0223 22:22:13.490724   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.490733   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.490746   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.490755   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.490784   80620 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"860"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"62ac0740-2090-4217-a812-0d7ea88a967e","resourceVersion":"301","creationTimestamp":"2023-02-23T22:17:49Z"}}]}
	I0223 22:22:13.491028   80620 default_sa.go:45] found service account: "default"
	I0223 22:22:13.491048   80620 default_sa.go:55] duration metric: took 194.273065ms for default service account to be created ...
	I0223 22:22:13.491059   80620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 22:22:13.687553   80620 request.go:622] Waited for 196.395892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.687624   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0223 22:22:13.687630   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.687642   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.687659   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.691923   80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 22:22:13.691949   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.691960   80620 round_trippers.go:580]     Audit-Id: b99f1d26-3de6-4548-9948-e1ef63d9e02a
	I0223 22:22:13.691969   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.691980   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.691988   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.691997   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.692005   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.693522   80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
	I0223 22:22:13.695955   80620 system_pods.go:86] 12 kube-system pods found
	I0223 22:22:13.695978   80620 system_pods.go:89] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
	I0223 22:22:13.695985   80620 system_pods.go:89] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
	I0223 22:22:13.695993   80620 system_pods.go:89] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
	I0223 22:22:13.695999   80620 system_pods.go:89] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
	I0223 22:22:13.696005   80620 system_pods.go:89] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
	I0223 22:22:13.696012   80620 system_pods.go:89] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
	I0223 22:22:13.696020   80620 system_pods.go:89] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
	I0223 22:22:13.696028   80620 system_pods.go:89] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
	I0223 22:22:13.696040   80620 system_pods.go:89] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
	I0223 22:22:13.696048   80620 system_pods.go:89] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
	I0223 22:22:13.696055   80620 system_pods.go:89] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
	I0223 22:22:13.696061   80620 system_pods.go:89] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
	I0223 22:22:13.696071   80620 system_pods.go:126] duration metric: took 205.005964ms to wait for k8s-apps to be running ...
	I0223 22:22:13.696085   80620 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 22:22:13.696135   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:22:13.709623   80620 system_svc.go:56] duration metric: took 13.531533ms WaitForService to wait for kubelet.
	I0223 22:22:13.709679   80620 kubeadm.go:578] duration metric: took 15.016875282s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
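[editor's note] The kubelet liveness check above is nothing more than the exit status of `systemctl is-active --quiet` (the log's argv carries a stray literal word "service", preserved verbatim above). A local equivalent, dropping the sudo/SSH hop and the stray token:

// kubelet_active.go: check a systemd unit via the exit code of
// `systemctl is-active --quiet`, as the log does over SSH. Sketch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; exit status 0 means the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
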
	I0223 22:22:13.709713   80620 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:22:13.887138   80620 request.go:622] Waited for 177.351024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0223 22:22:13.887250   80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0223 22:22:13.887261   80620 round_trippers.go:469] Request Headers:
	I0223 22:22:13.887269   80620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:22:13.887276   80620 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:22:13.889579   80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:22:13.889601   80620 round_trippers.go:577] Response Headers:
	I0223 22:22:13.889608   80620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
	I0223 22:22:13.889614   80620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
	I0223 22:22:13.889620   80620 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:13 GMT
	I0223 22:22:13.889625   80620 round_trippers.go:580]     Audit-Id: 4402b5a7-68c0-489c-bf87-bedbd28a14fe
	I0223 22:22:13.889631   80620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:22:13.889636   80620 round_trippers.go:580]     Content-Type: application/json
	I0223 22:22:13.889855   80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16192 chars]
	I0223 22:22:13.890436   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890455   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890468   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890474   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890481   80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 22:22:13.890489   80620 node_conditions.go:123] node cpu capacity is 2
	I0223 22:22:13.890496   80620 node_conditions.go:105] duration metric: took 180.777399ms to run NodePressure ...
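[editor's note] Several requests above waited roughly 180 to 200 ms with "due to client-side throttling, not priority and fairness": that is client-go's own rate limiter (by default about QPS 5 with burst 10), not the server. The limits live on rest.Config and can be raised when a tool issues bursts of reads; the values and kubeconfig path below are illustrative assumptions.

// qps.go: raise client-go's client-side rate limits to avoid the
// throttling waits logged above. Sketch only.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	// Defaults are modest; a burst of GETs like those above trips the
	// limiter and request.go logs the induced wait.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // use the clientset as usual
}
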
	I0223 22:22:13.890512   80620 start.go:228] waiting for startup goroutines ...
	I0223 22:22:13.890522   80620 start.go:233] waiting for cluster config update ...
	I0223 22:22:13.890533   80620 start.go:242] writing updated cluster config ...
	I0223 22:22:13.890966   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:13.891077   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:13.893728   80620 out.go:177] * Starting worker node multinode-773885-m02 in cluster multinode-773885
	I0223 22:22:13.895212   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:22:13.895236   80620 cache.go:57] Caching tarball of preloaded images
	I0223 22:22:13.895333   80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:22:13.895345   80620 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:22:13.895468   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:13.895625   80620 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:22:13.895655   80620 start.go:364] acquiring machines lock for multinode-773885-m02: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 22:22:13.895705   80620 start.go:368] acquired machines lock for "multinode-773885-m02" in 30.081µs
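[editor's note] The machines lock above is acquired with the Delay/Timeout semantics printed in the log (retry every 500ms, give up after 13m). A sketch of that acquire loop using flock follows; minikube actually uses a named-mutex library rather than this exact mechanism, and the lock path below is invented for illustration.

// machlock.go: exclusive inter-process lock with retry delay and
// overall timeout, mirroring the logged semantics. Linux-only sketch.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		// Non-blocking exclusive lock; EWOULDBLOCK means someone holds it.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("acquired machines lock")
}
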
	I0223 22:22:13.895724   80620 start.go:96] Skipping create...Using existing machine configuration
	I0223 22:22:13.895732   80620 fix.go:55] fixHost starting: m02
	I0223 22:22:13.896010   80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:22:13.896038   80620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:22:13.910341   80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0223 22:22:13.910796   80620 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:22:13.911318   80620 main.go:141] libmachine: Using API Version  1
	I0223 22:22:13.911343   80620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:22:13.911672   80620 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:22:13.911860   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:13.911979   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
	I0223 22:22:13.913566   80620 fix.go:103] recreateIfNeeded on multinode-773885-m02: state=Stopped err=<nil>
	I0223 22:22:13.913585   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	W0223 22:22:13.913746   80620 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 22:22:13.915708   80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885-m02" ...
	I0223 22:22:13.917009   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .Start
	I0223 22:22:13.917151   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring networks are active...
	I0223 22:22:13.917783   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network default is active
	I0223 22:22:13.918134   80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network mk-multinode-773885 is active
	I0223 22:22:13.918457   80620 main.go:141] libmachine: (multinode-773885-m02) Getting domain xml...
	I0223 22:22:13.919047   80620 main.go:141] libmachine: (multinode-773885-m02) Creating domain...
	I0223 22:22:15.148655   80620 main.go:141] libmachine: (multinode-773885-m02) Waiting to get IP...
	I0223 22:22:15.149521   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.149889   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.149974   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.149904   80738 retry.go:31] will retry after 193.258579ms: waiting for machine to come up
	I0223 22:22:15.344335   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.344701   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.344731   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.344650   80738 retry.go:31] will retry after 325.897575ms: waiting for machine to come up
	I0223 22:22:15.672194   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:15.672594   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:15.672628   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.672550   80738 retry.go:31] will retry after 464.389068ms: waiting for machine to come up
	I0223 22:22:16.138184   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:16.138690   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:16.138753   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.138682   80738 retry.go:31] will retry after 418.748231ms: waiting for machine to come up
	I0223 22:22:16.559096   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:16.559605   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:16.559635   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.559550   80738 retry.go:31] will retry after 471.42311ms: waiting for machine to come up
	I0223 22:22:17.033003   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:17.033388   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:17.033425   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.033349   80738 retry.go:31] will retry after 716.223287ms: waiting for machine to come up
	I0223 22:22:17.751192   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:17.751627   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:17.751662   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.751564   80738 retry.go:31] will retry after 829.526019ms: waiting for machine to come up
	I0223 22:22:18.582469   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:18.582861   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:18.582893   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:18.582810   80738 retry.go:31] will retry after 1.314736274s: waiting for machine to come up
	I0223 22:22:19.898527   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:19.898968   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:19.898996   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:19.898923   80738 retry.go:31] will retry after 1.848898641s: waiting for machine to come up
	I0223 22:22:21.749410   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:21.749799   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:21.749831   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:21.749746   80738 retry.go:31] will retry after 1.422968619s: waiting for machine to come up
	I0223 22:22:23.174280   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:23.174762   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:23.174796   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:23.174689   80738 retry.go:31] will retry after 2.26457317s: waiting for machine to come up
	I0223 22:22:25.440649   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:25.441040   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:25.441077   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:25.441025   80738 retry.go:31] will retry after 2.412299301s: waiting for machine to come up
	I0223 22:22:27.856562   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:27.857000   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
	I0223 22:22:27.857029   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:27.856943   80738 retry.go:31] will retry after 3.510265055s: waiting for machine to come up
	I0223 22:22:31.369182   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.369590   80620 main.go:141] libmachine: (multinode-773885-m02) Found IP for machine: 192.168.39.102
	I0223 22:22:31.369622   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.369632   80620 main.go:141] libmachine: (multinode-773885-m02) Reserving static IP address...
	I0223 22:22:31.370012   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.370035   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"}
	I0223 22:22:31.370045   80620 main.go:141] libmachine: (multinode-773885-m02) Reserved static IP address: 192.168.39.102
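[editor's note] The retry.go:31 lines above are a wait-for-IP loop: each attempt re-reads the domain's DHCP lease, then sleeps a growing, jittered interval (from about 193 ms up to about 3.5 s) until the lease appears. The shape of that loop in a stdlib-only sketch; getIP is a stand-in for the libvirt lease lookup, not minikube's actual function.

// waitip.go: retry with jittered, capped exponential backoff, the
// pattern behind the retry lines above. Sketch.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// getIP is an assumed stand-in; the real code queries the libvirt
// network for a lease matching the machine's MAC address.
func getIP() (string, error) {
	return "", errNoLease
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		// Jitter the wait, then grow it, capping at a few seconds,
		// matching the 193ms-to-3.5s progression in the log.
		time.Sleep(backoff/2 + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	ip, err := waitForIP(3 * time.Second)
	fmt.Println(ip, err)
}
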
	I0223 22:22:31.370056   80620 main.go:141] libmachine: (multinode-773885-m02) Waiting for SSH to be available...
	I0223 22:22:31.370068   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Getting to WaitForSSH function...
	I0223 22:22:31.372076   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.372417   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.372440   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.372551   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH client type: external
	I0223 22:22:31.372572   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa (-rw-------)
	I0223 22:22:31.372608   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 22:22:31.372622   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | About to run SSH command:
	I0223 22:22:31.372638   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | exit 0
	I0223 22:22:31.506747   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | SSH cmd err, output: <nil>: 
	I0223 22:22:31.507041   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetConfigRaw
	I0223 22:22:31.507719   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:31.510014   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.510356   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.510390   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.510652   80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
	I0223 22:22:31.510883   80620 machine.go:88] provisioning docker machine ...
	I0223 22:22:31.510909   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:31.511142   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.511321   80620 buildroot.go:166] provisioning hostname "multinode-773885-m02"
	I0223 22:22:31.511339   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.511489   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.513584   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.513939   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.513969   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.514122   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:31.514268   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.514404   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.514532   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:31.514655   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:31.515234   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:31.515255   80620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773885-m02 && echo "multinode-773885-m02" | sudo tee /etc/hostname
	I0223 22:22:31.655693   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885-m02
	
	I0223 22:22:31.655725   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.658407   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.658788   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.658815   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.658999   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:31.659184   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.659347   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:31.659464   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:31.659613   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:31.660176   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:31.660212   80620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773885-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773885-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:22:31.799792   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:22:31.799859   80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
	I0223 22:22:31.799879   80620 buildroot.go:174] setting up certificates
	I0223 22:22:31.799889   80620 provision.go:83] configureAuth start
	I0223 22:22:31.799902   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
	I0223 22:22:31.800252   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:31.803534   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.803989   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.804018   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.804274   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:31.806753   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.807088   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:31.807121   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:31.807237   80620 provision.go:138] copyHostCerts
	I0223 22:22:31.807268   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:22:31.807311   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
	I0223 22:22:31.807324   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
	I0223 22:22:31.807414   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
	I0223 22:22:31.807572   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:22:31.807597   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
	I0223 22:22:31.807602   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
	I0223 22:22:31.807632   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
	I0223 22:22:31.807685   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:22:31.807702   80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
	I0223 22:22:31.807707   80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
	I0223 22:22:31.807729   80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
	I0223 22:22:31.807773   80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885-m02 san=[192.168.39.102 192.168.39.102 localhost 127.0.0.1 minikube multinode-773885-m02]
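[editor's note] configureAuth above generates a server certificate whose SANs cover the machine IP, localhost, and the hostnames minikube will dial (the san=[...] list in the log). Generating a comparable SAN-bearing certificate with the standard library looks like the sketch below; it self-signs for brevity, whereas minikube signs with the CA key under .minikube/certs.

// servercert.go: create a certificate carrying the same kinds of SANs
// as the provision step above. Self-signed sketch, not minikube's code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-773885-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list logged above.
		DNSNames:    []string{"localhost", "minikube", "multinode-773885-m02"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.102"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
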
	I0223 22:22:32.063720   80620 provision.go:172] copyRemoteCerts
	I0223 22:22:32.063776   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:22:32.063800   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.066310   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.066712   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.066742   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.066876   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.067090   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.067230   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.067359   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:32.161807   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:22:32.161874   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 22:22:32.184819   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:22:32.184883   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 22:22:32.206537   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:22:32.206625   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 22:22:32.228031   80620 provision.go:86] duration metric: configureAuth took 428.129514ms
	I0223 22:22:32.228052   80620 buildroot.go:189] setting minikube options for container-runtime
	I0223 22:22:32.228295   80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:22:32.228322   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:32.228634   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.231144   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.231489   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.231520   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.231601   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.231819   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.231999   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.232117   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.232312   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.232708   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.232719   80620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:22:32.365102   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0223 22:22:32.365122   80620 buildroot.go:70] root file system type: tmpfs
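[editor's note] The probe above shells out to `df --output=fstype / | tail -n 1` to learn the root filesystem type (tmpfs on this Buildroot image). The same answer can be read from /proc/self/mounts without spawning a process; taking the last entry mounted on "/" mirrors the `tail -n 1`.

// rootfs.go: determine the root filesystem type, the question the df
// probe above answers. Sketch.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fstype := "unknown"
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Format: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/" {
			fstype = fields[2] // last match wins, like `tail -n 1`
		}
	}
	fmt.Println("root file system type:", fstype) // e.g. "tmpfs" above
}
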
	I0223 22:22:32.365241   80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:22:32.365265   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.367818   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.368241   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.368263   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.368492   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.368703   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.368872   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.368982   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.369180   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.369581   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.369639   80620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.240"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:22:32.513495   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.240
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:22:32.513523   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:32.515906   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.516266   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:32.516300   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:32.516468   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:32.516680   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.516873   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:32.517028   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:32.517178   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:32.517625   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:32.517648   80620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:22:33.354684   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0223 22:22:33.354711   80620 machine.go:91] provisioned docker machine in 1.843811829s
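[editor's note] The SSH command above is a compare-and-replace idiom: write docker.service.new, and only if it differs from the installed unit (a missing file, as here, also counts as different, hence the diff error), move it into place and daemon-reload/enable/restart. The same idiom in Go, run locally rather than over SSH; the paths and systemctl invocations mirror the log.

// replaceunit.go: "write .new, replace only if different, then reload".
// Sketch of the idiom above; needs root to actually run.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = cur + ".new"

	oldBytes, err := os.ReadFile(cur) // a read error (e.g. missing file) counts as "different"
	newBytes, err2 := os.ReadFile(next)
	if err2 != nil {
		panic(err2)
	}
	if err == nil && bytes.Equal(oldBytes, newBytes) {
		fmt.Println("unit unchanged; nothing to do")
		return
	}
	if err := os.Rename(next, cur); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %v: %s", args, err, out))
		}
	}
	fmt.Println("docker unit replaced and service restarted")
}
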
	I0223 22:22:33.354721   80620 start.go:300] post-start starting for "multinode-773885-m02" (driver="kvm2")
	I0223 22:22:33.354729   80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:22:33.354752   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.355077   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:22:33.355108   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.357808   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.358150   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.358170   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.358307   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.358509   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.358697   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.358856   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.452337   80620 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:22:33.456207   80620 command_runner.go:130] > NAME=Buildroot
	I0223 22:22:33.456227   80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0223 22:22:33.456233   80620 command_runner.go:130] > ID=buildroot
	I0223 22:22:33.456241   80620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0223 22:22:33.456248   80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0223 22:22:33.456287   80620 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 22:22:33.456303   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
	I0223 22:22:33.456371   80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
	I0223 22:22:33.456462   80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
	I0223 22:22:33.456474   80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
	I0223 22:22:33.456577   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:22:33.464384   80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
	I0223 22:22:33.486196   80620 start.go:303] post-start completed in 131.456152ms
	I0223 22:22:33.486221   80620 fix.go:57] fixHost completed within 19.590489491s
	I0223 22:22:33.486246   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.488925   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.489233   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.489259   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.489444   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.489642   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.489819   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.489958   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.490087   80620 main.go:141] libmachine: Using SSH client type: native
	I0223 22:22:33.490502   80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0223 22:22:33.490517   80620 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0223 22:22:33.619595   80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190953.568894594
	
	I0223 22:22:33.619615   80620 fix.go:207] guest clock: 1677190953.568894594
	I0223 22:22:33.619622   80620 fix.go:220] Guest: 2023-02-23 22:22:33.568894594 +0000 UTC Remote: 2023-02-23 22:22:33.48622588 +0000 UTC m=+80.262153220 (delta=82.668714ms)
	I0223 22:22:33.619636   80620 fix.go:191] guest clock delta is within tolerance: 82.668714ms
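	The fix.go lines above are a simple clock-skew check: minikube reads the guest's wall clock over SSH with date +%s.%N and subtracts the host-side timestamp taken around the same moment, here 1677190953.568894594 - 1677190953.48622588, which is roughly 0.0827 s and therefore within tolerance. Below is a hedged sketch of the same check; the SSH target is illustrative.
	
	# Compare the guest's wall clock (read over SSH) against the local one.
	guest=$(ssh docker@192.168.39.102 'date +%s.%N')
	host=$(date +%s.%N)
	# Print the absolute skew in seconds.
	awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.6f s\n", d }'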
	I0223 22:22:33.619643   80620 start.go:83] releasing machines lock for "multinode-773885-m02", held for 19.723927358s
	I0223 22:22:33.619668   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.619923   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:22:33.622598   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.623025   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.623058   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.625082   80620 out.go:177] * Found network options:
	I0223 22:22:33.626668   80620 out.go:177]   - NO_PROXY=192.168.39.240
	W0223 22:22:33.628011   80620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:22:33.628044   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628608   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628794   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:22:33.628886   80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:22:33.628929   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	W0223 22:22:33.629039   80620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:22:33.629123   80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:22:33.629150   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:22:33.631754   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.631877   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632173   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.632199   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632233   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:22:33.632253   80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:22:33.632406   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.632530   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:22:33.632612   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.632687   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:22:33.632797   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.632952   80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:22:33.632945   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.633068   80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:22:33.747533   80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:22:33.748590   80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0223 22:22:33.748617   80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 22:22:33.748665   80620 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:22:33.752644   80620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:22:33.752772   80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:22:33.762613   80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:22:33.779129   80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:22:33.794495   80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0223 22:22:33.794614   80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 22:22:33.794634   80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:22:33.794710   80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:22:33.819645   80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:22:33.819665   80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:22:33.819671   80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:22:33.819676   80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:22:33.819680   80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:22:33.819684   80620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:22:33.819688   80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0223 22:22:33.819694   80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:22:33.819697   80620 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0223 22:22:33.819702   80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:22:33.819707   80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0223 22:22:33.821344   80620 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0223 22:22:33.821366   80620 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:22:33.821378   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:22:33.821513   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:22:33.838092   80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:22:33.838113   80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:22:33.838173   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:22:33.849104   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:22:33.860042   80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:22:33.860082   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:22:33.871017   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:22:33.881892   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:22:33.892548   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:22:33.903374   80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:22:33.914628   80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
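	Taken together, the sed edits above should leave /etc/containerd/config.toml using the cgroupfs driver (SystemdCgroup = false), the io.containerd.runc.v2 shim, the registry.k8s.io/pause:3.9 sandbox image, and /etc/cni/net.d as the CNI conf dir. A quick way to confirm the rewritten keys on the guest is sketched below; the expected values are inferred from the edits, since the file itself is not echoed in this log.
	
	# Show the settings touched by the sed edits above.
	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# Expected, given those edits:
	#   SystemdCgroup = false
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   conf_dir = "/etc/cni/net.d"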
	I0223 22:22:33.925877   80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:22:33.935581   80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:22:33.935636   80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:22:33.945618   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:34.050114   80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:22:34.068154   80620 start.go:485] detecting cgroup driver to use...
	I0223 22:22:34.068229   80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:22:34.089986   80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0223 22:22:34.090009   80620 command_runner.go:130] > [Unit]
	I0223 22:22:34.090019   80620 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:22:34.090033   80620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:22:34.090041   80620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0223 22:22:34.090049   80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0223 22:22:34.090056   80620 command_runner.go:130] > StartLimitBurst=3
	I0223 22:22:34.090063   80620 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:22:34.090072   80620 command_runner.go:130] > [Service]
	I0223 22:22:34.090083   80620 command_runner.go:130] > Type=notify
	I0223 22:22:34.090089   80620 command_runner.go:130] > Restart=on-failure
	I0223 22:22:34.090104   80620 command_runner.go:130] > Environment=NO_PROXY=192.168.39.240
	I0223 22:22:34.090111   80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:22:34.090118   80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:22:34.090150   80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:22:34.090164   80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:22:34.090170   80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:22:34.090176   80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:22:34.090182   80620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:22:34.090190   80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:22:34.090196   80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:22:34.090200   80620 command_runner.go:130] > ExecStart=
	I0223 22:22:34.090213   80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0223 22:22:34.090219   80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:22:34.090224   80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:22:34.090233   80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:22:34.090237   80620 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:22:34.090241   80620 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:22:34.090245   80620 command_runner.go:130] > LimitCORE=infinity
	I0223 22:22:34.090251   80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:22:34.090256   80620 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0223 22:22:34.090260   80620 command_runner.go:130] > TasksMax=infinity
	I0223 22:22:34.090265   80620 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:22:34.090273   80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:22:34.090279   80620 command_runner.go:130] > Delegate=yes
	I0223 22:22:34.090285   80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:22:34.090293   80620 command_runner.go:130] > KillMode=process
	I0223 22:22:34.090297   80620 command_runner.go:130] > [Install]
	I0223 22:22:34.090302   80620 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:22:34.090359   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:22:34.105030   80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0223 22:22:34.126591   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0223 22:22:34.140060   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:22:34.153929   80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 22:22:34.184699   80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:22:34.197888   80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:22:34.214560   80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:22:34.214588   80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:22:34.214922   80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:22:34.314415   80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:22:34.423777   80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:22:34.423812   80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
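	The 144-byte /etc/docker/daemon.json pushed here carries the "cgroupfs" choice announced on the previous line, but its contents are not echoed in this log. A plausible minimal shape, stated as an assumption rather than a capture, would set Docker's exec-opts; it can be checked on the guest like so:
	
	# Illustrative only: the real file is written from memory by minikube.
	cat /etc/docker/daemon.json
	# Something like:
	#   { "exec-opts": ["native.cgroupdriver=cgroupfs"] }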
	I0223 22:22:34.439350   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:34.539377   80620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:22:35.976151   80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.436733266s)
	I0223 22:22:35.976218   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:22:36.088366   80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:22:36.208338   80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:22:36.318554   80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:22:36.423882   80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:22:36.438700   80620 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0223 22:22:36.441277   80620 out.go:177] 
	W0223 22:22:36.442813   80620 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0223 22:22:36.442833   80620 out.go:239] * 
	W0223 22:22:36.443730   80620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 22:22:36.445382   80620 out.go:177] 
	
	* 
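	The failure itself happens on the worker VM: RUNTIME_ENABLE means sudo systemctl restart cri-docker.socket exited 1 right after the socket was unmasked and enabled, while the journal excerpts below come from the control-plane node and so never show it. On the affected guest the usual first steps would be the following; the unit names are the ones used in this log, everything else is a sketch.
	
	# Inspect the failing socket unit and its paired service.
	sudo systemctl status cri-docker.socket cri-docker.service
	sudo journalctl -xeu cri-docker.socket --no-pager | tail -n 50
	# A socket unit often fails to start when its .service is missing or masked.
	systemctl list-unit-files | grep cri-docker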
	* ==> Docker <==
	* -- Journal begins at Thu 2023-02-23 22:21:24 UTC, ends at Thu 2023-02-23 22:22:40 UTC. --
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653197396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653344660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653370552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653655096Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6c05479ab6bded8fa4b510984ebdaff14f9e940ce5f996cbbfa74f89cdf0e4df pid=2349 runtime=io.containerd.runc.v2
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976478317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976529296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976538800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.977357166Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/08db8c8fe66700151ca6e921ec0c7827f3f8b9da2185e6f9b77717b3db2213a2 pid=2641 runtime=io.containerd.runc.v2
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.562985619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563244746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563254901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563554212Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/17bc89f184c67734f2c7bf76e9475c45856ec85a6cc69703a04036b48218a306 pid=2718 runtime=io.containerd.runc.v2
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277252833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277345995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277367820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277588969Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9f2502586a39c34ac304fe5d1a3c0d2111c439b907e9f9955feec5ca5504872d pid=2837 runtime=io.containerd.runc.v2
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887734997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887789077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887798415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887932649Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ec64ae912e0437233e2ff6d3d8ed0b5e64201755fd0b86f988efacd563ac301c pid=2935 runtime=io.containerd.runc.v2
	Feb 23 22:22:26 multinode-773885 dockerd[827]: time="2023-02-23T22:22:26.143265689Z" level=info msg="ignoring event" container=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144112416Z" level=info msg="shim disconnected" id=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144166893Z" level=warning msg="cleaning up after shim disconnected" id=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a namespace=moby
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144202001Z" level=info msg="cleaning up dead shim"
	Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.167427651Z" level=warning msg="cleanup warnings time=\"2023-02-23T22:22:26Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3166 runtime=io.containerd.runc.v2\n"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	ec64ae912e043       8c811b4aec35f                                                                                         29 seconds ago      Running             busybox                   1                   9f2502586a39c
	17bc89f184c67       5185b96f0becf                                                                                         30 seconds ago      Running             coredns                   1                   08db8c8fe6670
	6c05479ab6bde       d6e3e26021b60                                                                                         42 seconds ago      Running             kindnet-cni               1                   e749663c5c7e7
	27a3e00db0cef       6e38f40d628db                                                                                         45 seconds ago      Exited              storage-provisioner       1                   bc303f21527d1
	9454f57758e35       46a6bb3c77ce0                                                                                         45 seconds ago      Running             kube-proxy                1                   7cce6a3412d50
	1e657e364abdc       fce326961ae2d                                                                                         51 seconds ago      Running             etcd                      1                   9832634b69a74
	efd94ac044a0a       655493523f607                                                                                         51 seconds ago      Running             kube-scheduler            1                   6464d18d96882
	6c70297f99403       e9c08e11b07f6                                                                                         51 seconds ago      Running             kube-controller-manager   1                   bff62e4487a30
	1f74fa3dd2e7b       deb04688c4a35                                                                                         51 seconds ago      Running             kube-apiserver            1                   4d2cd9fe6c8db
	80d446e21be45       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   ebbb7d19d9aa3
	a31cf43457e01       5185b96f0becf                                                                                         4 minutes ago       Exited              coredns                   0                   75e472928e30d
	f6b2b873cba93       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              4 minutes ago       Exited              kindnet-cni               0                   f284ce294fa00
	6becaf5c86404       46a6bb3c77ce0                                                                                         4 minutes ago       Exited              kube-proxy                0                   a2a9a29b5a412
	8d29ee663e61d       fce326961ae2d                                                                                         5 minutes ago       Exited              etcd                      0                   3b6e6d975efae
	baad115b76c60       655493523f607                                                                                         5 minutes ago       Exited              kube-scheduler            0                   072b5f08a10f2
	53723346fe3cc       e9c08e11b07f6                                                                                         5 minutes ago       Exited              kube-controller-manager   0                   979e703c6176a
	6a41aad932999       deb04688c4a35                                                                                         5 minutes ago       Exited              kube-apiserver            0                   745d6ec7adf4b
	
	* 
	* ==> coredns [17bc89f184c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:60321 - 9770 "HINFO IN 6662394053686617131.163874164669885542. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.069250639s
	
	* 
	* ==> coredns [a31cf43457e0] <==
	* [INFO] 10.244.1.2:47000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758837s
	[INFO] 10.244.1.2:44690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131616s
	[INFO] 10.244.1.2:37067 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011391s
	[INFO] 10.244.1.2:38424 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001108385s
	[INFO] 10.244.1.2:47838 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089356s
	[INFO] 10.244.1.2:41552 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106594s
	[INFO] 10.244.1.2:51630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135553s
	[INFO] 10.244.0.3:55853 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122391s
	[INFO] 10.244.0.3:35953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008752s
	[INFO] 10.244.0.3:56239 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083093s
	[INFO] 10.244.0.3:38385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083481s
	[INFO] 10.244.1.2:53920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283555s
	[INFO] 10.244.1.2:34363 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000773507s
	[INFO] 10.244.1.2:54662 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081096s
	[INFO] 10.244.1.2:48627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000266217s
	[INFO] 10.244.0.3:54203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197101s
	[INFO] 10.244.0.3:52399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162405s
	[INFO] 10.244.0.3:45614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000234431s
	[INFO] 10.244.0.3:47751 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134862s
	[INFO] 10.244.1.2:53869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201736s
	[INFO] 10.244.1.2:43680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175885s
	[INFO] 10.244.1.2:45494 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167403s
	[INFO] 10.244.1.2:52027 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017095s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-773885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=multinode-773885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T22_17_39_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773885
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:22:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:22:04 +0000   Thu, 23 Feb 2023 22:22:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    multinode-773885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1475187eff99446eb4f7e011051cc8fa
	  System UUID:                1475187e-ff99-446e-b4f7-e011051cc8fa
	  Boot ID:                    4d4d0a54-af2e-49a7-a9dd-250c866abcb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-9b7sp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-787d4945fb-ktr7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m50s
	  kube-system                 etcd-multinode-773885                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-p64zr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m50s
	  kube-system                 kube-apiserver-multinode-773885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-multinode-773885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-mdjks                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-multinode-773885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  Starting                 44s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  5m15s (x5 over 5m15s)  kubelet          Node multinode-773885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s (x5 over 5m15s)  kubelet          Node multinode-773885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s (x5 over 5m15s)  kubelet          Node multinode-773885 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     5m2s                   kubelet          Node multinode-773885 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m2s                   kubelet          Node multinode-773885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s                   kubelet          Node multinode-773885 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m2s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m51s                  node-controller  Node multinode-773885 event: Registered Node multinode-773885 in Controller
	  Normal  NodeReady                4m39s                  kubelet          Node multinode-773885 status is now: NodeReady
	  Normal  Starting                 53s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)      kubelet          Node multinode-773885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)      kubelet          Node multinode-773885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 52s)      kubelet          Node multinode-773885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  52s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                    node-controller  Node multinode-773885 event: Registered Node multinode-773885 in Controller
	
	
	Name:               multinode-773885-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773885-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773885-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:20:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:19:17 +0000   Thu, 23 Feb 2023 22:18:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    multinode-773885-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb9064ecea5b4e79869f499ba8bce75c
	  System UUID:                fb9064ec-ea5b-4e79-869f-499ba8bce75c
	  Boot ID:                    4be4ac98-4af3-4b16-af45-9c05c30bb17d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-zscjg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kindnet-fg44s               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-5d5vn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node multinode-773885-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node multinode-773885-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node multinode-773885-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller
	  Normal  NodeReady                3m41s                  kubelet          Node multinode-773885-m02 status is now: NodeReady
	  Normal  RegisteredNode           34s                    node-controller  Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Feb23 22:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.955731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.280486] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148289] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.553293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.692232] systemd-fstab-generator[510]: Ignoring "noauto" for root device
	[  +0.095720] systemd-fstab-generator[527]: Ignoring "noauto" for root device
	[  +1.185288] systemd-fstab-generator[758]: Ignoring "noauto" for root device
	[  +0.248453] systemd-fstab-generator[792]: Ignoring "noauto" for root device
	[  +0.102398] systemd-fstab-generator[803]: Ignoring "noauto" for root device
	[  +0.122364] systemd-fstab-generator[816]: Ignoring "noauto" for root device
	[  +1.531595] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.111043] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +0.104179] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +0.097652] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[ +11.667470] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +0.392417] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.206240] kauditd_printk_skb: 8 callbacks suppressed
	[Feb23 22:22] kauditd_printk_skb: 16 callbacks suppressed
	
	* 
	* ==> etcd [1e657e364abd] <==
	* {"level":"info","ts":"2023-02-23T22:21:50.930Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T22:21:50.930Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T22:21:50.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 switched to configuration voters=(2080375272429567737)"}
	{"level":"info","ts":"2023-02-23T22:21:50.932Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","added-peer-id":"1cdefa49b8abbef9","added-peer-peer-urls":["https://192.168.39.240:2380"]}
	{"level":"info","ts":"2023-02-23T22:21:50.933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:21:50.934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:21:50.954Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"1cdefa49b8abbef9","initial-advertise-peer-urls":["https://192.168.39.240:2380"],"listen-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.240:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:21:50.958Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgPreVoteResp from 1cdefa49b8abbef9 at term 2"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became candidate at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgVoteResp from 1cdefa49b8abbef9 at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became leader at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1cdefa49b8abbef9 elected leader 1cdefa49b8abbef9 at term 3"}
	{"level":"info","ts":"2023-02-23T22:21:52.080Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"1cdefa49b8abbef9","local-member-attributes":"{Name:multinode-773885 ClientURLs:[https://192.168.39.240:2379]}","request-path":"/0/members/1cdefa49b8abbef9/attributes","cluster-id":"e0745912b0778b6e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:21:52.080Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:21:52.083Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.240:2379"}
	{"level":"info","ts":"2023-02-23T22:21:52.084Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [8d29ee663e61] <==
	* {"level":"info","ts":"2023-02-23T22:17:32.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgVoteResp from 1cdefa49b8abbef9 at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1cdefa49b8abbef9 elected leader 1cdefa49b8abbef9 at term 2"}
	{"level":"info","ts":"2023-02-23T22:17:32.484Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:17:32.487Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"1cdefa49b8abbef9","local-member-attributes":"{Name:multinode-773885 ClientURLs:[https://192.168.39.240:2379]}","request-path":"/0/members/1cdefa49b8abbef9/attributes","cluster-id":"e0745912b0778b6e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:17:32.488Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:17:32.492Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.240:2379"}
	{"level":"info","ts":"2023-02-23T22:17:32.489Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:17:32.496Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:17:32.489Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:17:32.503Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:17:32.504Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:17:32.504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:17:32.507Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-02-23T22:18:39.794Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.910442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-23T22:18:39.794Z","caller":"traceutil/trace.go:171","msg":"trace[1229332276] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"155.153979ms","start":"2023-02-23T22:18:39.639Z","end":"2023-02-23T22:18:39.794Z","steps":["trace[1229332276] 'range keys from in-memory index tree'  (duration: 154.79846ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T22:19:39.387Z","caller":"traceutil/trace.go:171","msg":"trace[841849164] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"239.425375ms","start":"2023-02-23T22:19:39.147Z","end":"2023-02-23T22:19:39.387Z","steps":["trace[841849164] 'process raft request'  (duration: 239.262494ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T22:19:41.080Z","caller":"traceutil/trace.go:171","msg":"trace[146502320] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"106.873274ms","start":"2023-02-23T22:19:40.973Z","end":"2023-02-23T22:19:41.080Z","steps":["trace[146502320] 'process raft request'  (duration: 106.732936ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T22:20:45.246Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-23T22:20:45.246Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-773885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	{"level":"info","ts":"2023-02-23T22:20:45.273Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"1cdefa49b8abbef9"}
	{"level":"info","ts":"2023-02-23T22:20:45.277Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:20:45.285Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2023-02-23T22:20:45.285Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-773885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	
	* 
	* ==> kernel <==
	*  22:22:41 up 1 min,  0 users,  load average: 0.60, 0.19, 0.07
	Linux multinode-773885 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [6c05479ab6bd] <==
	* I0223 22:21:59.629537       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:21:59.629545       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:21:59.629690       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0} 
	I0223 22:22:09.634203       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:09.634224       1 main.go:227] handling current node
	I0223 22:22:09.634233       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:09.634237       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:22:09.634329       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:22:09.634334       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:22:19.648879       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:19.649253       1 main.go:227] handling current node
	I0223 22:22:19.649329       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:19.649426       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:22:19.649553       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:22:19.649592       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:22:29.663056       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:29.663342       1 main.go:227] handling current node
	I0223 22:22:29.663589       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:29.663639       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:22:29.663927       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:22:29.663981       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:22:39.669294       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:22:39.669316       1 main.go:227] handling current node
	I0223 22:22:39.669334       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:22:39.669340       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kindnet [f6b2b873cba9] <==
	* I0223 22:20:08.782335       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:08.782366       1 main.go:227] handling current node
	I0223 22:20:08.782378       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:08.782383       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:08.782498       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:08.782503       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24] 
	I0223 22:20:18.789034       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:18.789102       1 main.go:227] handling current node
	I0223 22:20:18.789112       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:18.789118       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:18.789480       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:18.789490       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24] 
	I0223 22:20:28.797182       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:28.797218       1 main.go:227] handling current node
	I0223 22:20:28.797230       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:28.797238       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:28.797428       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:28.797438       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24] 
	I0223 22:20:38.808257       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0223 22:20:38.808531       1 main.go:227] handling current node
	I0223 22:20:38.808612       1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
	I0223 22:20:38.808735       1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24] 
	I0223 22:20:38.808954       1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
	I0223 22:20:38.809162       1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24] 
	I0223 22:20:38.809406       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [1f74fa3dd2e7] <==
	* I0223 22:21:53.767701       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0223 22:21:53.767780       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0223 22:21:53.763570       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0223 22:21:53.767927       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0223 22:21:53.807375       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0223 22:21:53.807485       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0223 22:21:53.845960       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:21:53.860908       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 22:21:53.860943       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:21:53.861339       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:21:53.865182       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:21:53.875653       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:21:53.875809       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:21:53.875948       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:21:53.875961       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 22:21:53.941378       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0223 22:21:54.514978       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:21:54.778557       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:21:56.611533       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:21:56.743211       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:21:56.752344       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:21:56.816590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:21:56.823384       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 22:22:06.886425       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 22:22:06.981775       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [6a41aad93299] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:20:55.126061       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:20:55.154966       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:20:55.192941       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [53723346fe3c] <==
	* I0223 22:18:04.424086       1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0223 22:18:46.708565       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m02" does not exist
	I0223 22:18:46.720411       1 range_allocator.go:372] Set node multinode-773885-m02 PodCIDR to [10.244.1.0/24]
	I0223 22:18:46.740966       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fg44s"
	I0223 22:18:46.741018       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5d5vn"
	W0223 22:18:49.432085       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-773885-m02. Assuming now as a timestamp.
	I0223 22:18:49.432675       1 event.go:294] "Event occurred" object="multinode-773885-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller"
	W0223 22:18:59.747513       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	I0223 22:19:02.090093       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 22:19:02.101165       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-zscjg"
	I0223 22:19:02.114911       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-9b7sp"
	I0223 22:19:04.450628       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-zscjg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-zscjg"
	W0223 22:19:46.421861       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	W0223 22:19:46.423059       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m03" does not exist
	I0223 22:19:46.438555       1 range_allocator.go:372] Set node multinode-773885-m03 PodCIDR to [10.244.2.0/24]
	I0223 22:19:46.456557       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-psgdt"
	I0223 22:19:46.456590       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fbfsf"
	I0223 22:19:49.459354       1 event.go:294] "Event occurred" object="multinode-773885-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773885-m03 event: Registered Node multinode-773885-m03 in Controller"
	W0223 22:19:49.460425       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-773885-m03. Assuming now as a timestamp.
	W0223 22:19:59.274458       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	W0223 22:20:33.012085       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	W0223 22:20:34.095715       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m03" does not exist
	W0223 22:20:34.096409       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	I0223 22:20:34.104228       1 range_allocator.go:372] Set node multinode-773885-m03 PodCIDR to [10.244.3.0/24]
	W0223 22:20:42.177970       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m03 node
	
	* 
	* ==> kube-controller-manager [6c70297f9940] <==
	* I0223 22:22:06.874261       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0223 22:22:06.874514       1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0223 22:22:06.874727       1 shared_informer.go:280] Caches are synced for persistent volume
	I0223 22:22:06.874139       1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
	I0223 22:22:06.874151       1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
	I0223 22:22:06.885778       1 shared_informer.go:280] Caches are synced for namespace
	I0223 22:22:06.887045       1 shared_informer.go:280] Caches are synced for node
	I0223 22:22:06.887199       1 range_allocator.go:167] Sending events to api server.
	I0223 22:22:06.887268       1 range_allocator.go:171] Starting range CIDR allocator
	I0223 22:22:06.887457       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0223 22:22:06.887727       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0223 22:22:06.894791       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0223 22:22:06.902215       1 shared_informer.go:280] Caches are synced for attach detach
	I0223 22:22:06.907056       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0223 22:22:06.947594       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0223 22:22:06.985123       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:06.986536       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:07.004087       1 shared_informer.go:280] Caches are synced for crt configmap
	I0223 22:22:07.022102       1 shared_informer.go:280] Caches are synced for deployment
	I0223 22:22:07.024559       1 shared_informer.go:280] Caches are synced for disruption
	I0223 22:22:07.043836       1 shared_informer.go:280] Caches are synced for bootstrap_signer
	I0223 22:22:07.418122       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:22:07.418162       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 22:22:07.423312       1 shared_informer.go:280] Caches are synced for garbage collector
	W0223 22:22:39.253877       1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
	
	* 
	* ==> kube-proxy [6becaf5c8640] <==
	* I0223 22:17:52.428519       1 node.go:163] Successfully retrieved node IP: 192.168.39.240
	I0223 22:17:52.428776       1 server_others.go:109] "Detected node IP" address="192.168.39.240"
	I0223 22:17:52.429048       1 server_others.go:535] "Using iptables proxy"
	I0223 22:17:52.471955       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0223 22:17:52.472202       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:17:52.472334       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:17:52.472860       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:17:52.473096       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:17:52.473898       1 config.go:317] "Starting service config controller"
	I0223 22:17:52.474393       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:17:52.474564       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:17:52.474637       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:17:52.476441       1 config.go:444] "Starting node config controller"
	I0223 22:17:52.476591       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:17:52.575596       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:17:52.575638       1 shared_informer.go:280] Caches are synced for service config
	I0223 22:17:52.577063       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [9454f57758e3] <==
	* I0223 22:21:55.723163       1 node.go:163] Successfully retrieved node IP: 192.168.39.240
	I0223 22:21:55.729131       1 server_others.go:109] "Detected node IP" address="192.168.39.240"
	I0223 22:21:55.733751       1 server_others.go:535] "Using iptables proxy"
	I0223 22:21:56.081608       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0223 22:21:56.081932       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:21:56.083401       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:21:56.084774       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:21:56.203479       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:21:56.205085       1 config.go:317] "Starting service config controller"
	I0223 22:21:56.205493       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:21:56.205674       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:21:56.205782       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:21:56.206845       1 config.go:444] "Starting node config controller"
	I0223 22:21:56.208637       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:21:56.348283       1 shared_informer.go:280] Caches are synced for node config
	I0223 22:21:56.351314       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:21:56.363180       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [baad115b76c6] <==
	* W0223 22:17:34.610009       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:34.610030       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:34.611025       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:34.611092       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:34.613999       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:17:34.614066       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 22:17:34.614149       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 22:17:34.614173       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 22:17:34.614213       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 22:17:34.614265       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 22:17:35.487184       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:17:35.487376       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:17:35.632170       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:35.632547       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:35.721529       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 22:17:35.721738       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 22:17:35.755180       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 22:17:35.755382       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 22:17:35.761259       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 22:17:35.761432       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 22:17:36.073523       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 22:17:36.074101       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 22:17:38.782901       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0223 22:20:45.176065       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0223 22:20:45.176491       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [efd94ac044a0] <==
	* I0223 22:21:51.487920       1 serving.go:348] Generated self-signed cert in-memory
	W0223 22:21:53.821119       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0223 22:21:53.821286       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 22:21:53.821327       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0223 22:21:53.821848       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0223 22:21:53.856843       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0223 22:21:53.857373       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:21:53.859249       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0223 22:21:53.859546       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0223 22:21:53.860180       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 22:21:53.859587       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 22:21:53.960971       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-02-23 22:21:24 UTC, ends at Thu 2023-02-23 22:22:41 UTC. --
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789777    1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789834    1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789892    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:21:58.789875256 +0000 UTC m=+11.061994009 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:57 multinode-773885 kubelet[1292]: E0223 22:21:57.695471    1292 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 23 22:21:57 multinode-773885 kubelet[1292]: E0223 22:21:57.696044    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume podName:5337fe89-b5a2-4562-84e3-3a7e1f201ff5 nodeName:}" failed. No retries permitted until 2023-02-23 22:22:01.695966879 +0000 UTC m=+13.968085633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume") pod "coredns-787d4945fb-ktr7h" (UID: "5337fe89-b5a2-4562-84e3-3a7e1f201ff5") : object "kube-system"/"coredns" not registered
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.167577    1292 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: I0223 22:21:58.564631    1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e749663c5c7e738a06bd131433cc331bdfe0302f4ed8652dc72907fd84e75f7f"
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.592064    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808766    1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808798    1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808843    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:22:02.808830445 +0000 UTC m=+15.080949197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:21:59 multinode-773885 kubelet[1292]: E0223 22:21:59.637649    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:22:00 multinode-773885 kubelet[1292]: E0223 22:22:00.141319    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
	Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.140900    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.730126    1292 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.730215    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume podName:5337fe89-b5a2-4562-84e3-3a7e1f201ff5 nodeName:}" failed. No retries permitted until 2023-02-23 22:22:09.730200815 +0000 UTC m=+22.002319582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume") pod "coredns-787d4945fb-ktr7h" (UID: "5337fe89-b5a2-4562-84e3-3a7e1f201ff5") : object "kube-system"/"coredns" not registered
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.141217    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838248    1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838298    1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838347    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:22:10.838331472 +0000 UTC m=+23.110450224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
	Feb 23 22:22:03 multinode-773885 kubelet[1292]: E0223 22:22:03.140982    1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
	Feb 23 22:22:26 multinode-773885 kubelet[1292]: I0223 22:22:26.975727    1292 scope.go:115] "RemoveContainer" containerID="b83daa4cdd8d8298126a07aab8f78401afc75993bca101cbb72ec10217214496"
	Feb 23 22:22:26 multinode-773885 kubelet[1292]: I0223 22:22:26.976270    1292 scope.go:115] "RemoveContainer" containerID="27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a"
	Feb 23 22:22:26 multinode-773885 kubelet[1292]: E0223 22:22:26.976460    1292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62cc7ef3-a47f-45ce-a9af-cf4de3e1824d)\"" pod="kube-system/storage-provisioner" podUID=62cc7ef3-a47f-45ce-a9af-cf4de3e1824d
	Feb 23 22:22:41 multinode-773885 kubelet[1292]: I0223 22:22:41.141351    1292 scope.go:115] "RemoveContainer" containerID="27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-773885 -n multinode-773885
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-773885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (3.31s)
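The post-mortem above shows the control plane recovering after the restart (etcd re-elects a leader at term 3; kube-proxy and the scheduler resync their caches) while the kubelet retries volume mounts until the CNI config is initialized and storage-provisioner briefly enters CrashLoopBackOff. For local triage, a minimal sketch assuming the same profile name and binaries as this run (the first two commands mirror the helpers_test.go invocations above; "minikube logs" is a standard subcommand added here for illustration, not part of the recorded run):

	# Re-check API server health for the profile under test
	out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-773885 -n multinode-773885
	# List pods that are not Running in any namespace
	kubectl --context multinode-773885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Collect fresh component logs from the node
	out/minikube-linux-amd64 logs -p multinode-773885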

                                                
                                    

Test pass (275/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.08
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.1/json-events 3.67
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.37
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
19 TestBinaryMirror 0.63
20 TestOffline 130.98
22 TestAddons/Setup 150.72
24 TestAddons/parallel/Registry 16.61
25 TestAddons/parallel/Ingress 29.89
26 TestAddons/parallel/MetricsServer 5.6
27 TestAddons/parallel/HelmTiller 19.89
29 TestAddons/parallel/CSI 64.57
30 TestAddons/parallel/Headlamp 12.18
31 TestAddons/parallel/CloudSpanner 5.45
34 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/StoppedEnableDisable 13.27
36 TestCertOptions 87.13
37 TestCertExpiration 292.39
38 TestDockerFlags 64.78
39 TestForceSystemdFlag 85.55
40 TestForceSystemdEnv 66.08
41 TestKVMDriverInstallOrUpdate 4.67
45 TestErrorSpam/setup 56.48
46 TestErrorSpam/start 0.35
47 TestErrorSpam/status 0.76
48 TestErrorSpam/pause 1.2
49 TestErrorSpam/unpause 1.32
50 TestErrorSpam/stop 12.58
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 74.69
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 42.76
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.1
61 TestFunctional/serial/CacheCmd/cache/add_remote 5.1
62 TestFunctional/serial/CacheCmd/cache/add_local 1.52
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
67 TestFunctional/serial/CacheCmd/cache/delete 0.1
68 TestFunctional/serial/MinikubeKubectlCmd 0.11
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
70 TestFunctional/serial/ExtraConfig 49.48
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.14
73 TestFunctional/serial/LogsFileCmd 1.2
75 TestFunctional/parallel/ConfigCmd 0.37
76 TestFunctional/parallel/DashboardCmd 25.06
77 TestFunctional/parallel/DryRun 0.28
78 TestFunctional/parallel/InternationalLanguage 0.15
79 TestFunctional/parallel/StatusCmd 1.12
83 TestFunctional/parallel/ServiceCmdConnect 9.65
84 TestFunctional/parallel/AddonsCmd 0.19
85 TestFunctional/parallel/PersistentVolumeClaim 52.18
87 TestFunctional/parallel/SSHCmd 0.44
88 TestFunctional/parallel/CpCmd 1.04
89 TestFunctional/parallel/MySQL 35.36
90 TestFunctional/parallel/FileSync 0.4
91 TestFunctional/parallel/CertSync 1.51
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
99 TestFunctional/parallel/License 0.15
100 TestFunctional/parallel/Version/short 0.05
101 TestFunctional/parallel/Version/components 0.64
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.42
107 TestFunctional/parallel/ImageCommands/Setup 1.32
108 TestFunctional/parallel/DockerEnv/bash 1.07
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.35
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.65
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.76
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.37
116 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.5
117 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.9
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.34
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
129 TestFunctional/parallel/ProfileCmd/profile_list 0.32
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
131 TestFunctional/parallel/MountCmd/any-port 16.02
132 TestFunctional/parallel/MountCmd/specific-port 1.92
133 TestFunctional/delete_addon-resizer_images 0.16
134 TestFunctional/delete_my-image_image 0.06
135 TestFunctional/delete_minikube_cached_images 0.06
136 TestGvisorAddon 340.18
139 TestImageBuild/serial/NormalBuild 2.46
140 TestImageBuild/serial/BuildWithBuildArg 1.51
141 TestImageBuild/serial/BuildWithDockerIgnore 0.48
142 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.33
145 TestIngressAddonLegacy/StartLegacyK8sCluster 87.33
147 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.73
148 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.45
149 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.86
152 TestJSONOutput/start/Command 69.82
153 TestJSONOutput/start/Audit 0
155 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/pause/Command 0.59
159 TestJSONOutput/pause/Audit 0
161 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/unpause/Command 0.56
165 TestJSONOutput/unpause/Audit 0
167 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/stop/Command 13.1
171 TestJSONOutput/stop/Audit 0
173 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
175 TestErrorJSONOutput 0.43
180 TestMainNoArgs 0.05
181 TestMinikubeProfile 114.8
184 TestMountStart/serial/StartWithMountFirst 27.87
185 TestMountStart/serial/VerifyMountFirst 0.39
186 TestMountStart/serial/StartWithMountSecond 33.9
187 TestMountStart/serial/VerifyMountSecond 0.37
188 TestMountStart/serial/DeleteFirst 1.09
189 TestMountStart/serial/VerifyMountPostDelete 0.37
190 TestMountStart/serial/Stop 2.08
191 TestMountStart/serial/RestartStopped 26.62
192 TestMountStart/serial/VerifyMountPostStop 0.4
195 TestMultiNode/serial/FreshStart2Nodes 137.16
196 TestMultiNode/serial/DeployApp2Nodes 4.77
197 TestMultiNode/serial/PingHostFrom2Pods 0.97
198 TestMultiNode/serial/AddNode 54.36
199 TestMultiNode/serial/ProfileList 0.26
200 TestMultiNode/serial/CopyFile 7.47
201 TestMultiNode/serial/StopNode 3.94
202 TestMultiNode/serial/StartAfterStop 31.04
205 TestMultiNode/serial/StopMultiNode 112.18
206 TestMultiNode/serial/RestartMultiNode 105.5
207 TestMultiNode/serial/ValidateNameConflict 58.03
212 TestPreload 167.85
214 TestScheduledStopUnix 128.41
215 TestSkaffold 88.75
218 TestRunningBinaryUpgrade 166.52
220 TestKubernetesUpgrade 220.11
222 TestStoppedBinaryUpgrade/Setup 0.36
234 TestStoppedBinaryUpgrade/Upgrade 188.06
243 TestPause/serial/Start 113.36
244 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
247 TestNoKubernetes/serial/StartWithK8s 96.74
248 TestPause/serial/SecondStartNoReconfiguration 68.63
249 TestPause/serial/Pause 2.34
250 TestPause/serial/VerifyStatus 0.27
251 TestPause/serial/Unpause 0.81
252 TestPause/serial/PauseAgain 0.92
253 TestNoKubernetes/serial/StartWithStopK8s 7.99
254 TestPause/serial/DeletePaused 1.12
255 TestPause/serial/VerifyDeletedResources 0.76
256 TestNoKubernetes/serial/Start 50.46
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 66.82
259 TestNoKubernetes/serial/Stop 2.12
260 TestNoKubernetes/serial/StartNoArgs 28.63
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
262 TestNetworkPlugins/group/auto/Start 100.05
263 TestNetworkPlugins/group/kindnet/Start 88.07
264 TestNetworkPlugins/group/calico/Start 128.54
265 TestNetworkPlugins/group/custom-flannel/Start 122.75
266 TestNetworkPlugins/group/auto/KubeletFlags 0.2
267 TestNetworkPlugins/group/auto/NetCatPod 11.29
268 TestNetworkPlugins/group/auto/DNS 0.19
269 TestNetworkPlugins/group/auto/Localhost 0.15
270 TestNetworkPlugins/group/auto/HairPin 0.2
271 TestNetworkPlugins/group/false/Start 95.23
272 TestNetworkPlugins/group/kindnet/ControllerPod 5.13
273 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
274 TestNetworkPlugins/group/kindnet/NetCatPod 14.37
275 TestNetworkPlugins/group/kindnet/DNS 0.22
276 TestNetworkPlugins/group/kindnet/Localhost 0.2
277 TestNetworkPlugins/group/kindnet/HairPin 0.25
278 TestNetworkPlugins/group/enable-default-cni/Start 126.73
279 TestNetworkPlugins/group/calico/ControllerPod 5.03
280 TestNetworkPlugins/group/calico/KubeletFlags 0.23
281 TestNetworkPlugins/group/calico/NetCatPod 14.49
282 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
283 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.37
284 TestNetworkPlugins/group/calico/DNS 0.2
285 TestNetworkPlugins/group/calico/Localhost 0.18
286 TestNetworkPlugins/group/calico/HairPin 0.17
287 TestNetworkPlugins/group/custom-flannel/DNS 0.24
288 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
289 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
290 TestNetworkPlugins/group/false/KubeletFlags 0.25
291 TestNetworkPlugins/group/false/NetCatPod 13.43
292 TestNetworkPlugins/group/flannel/Start 90.58
293 TestNetworkPlugins/group/bridge/Start 106.18
294 TestNetworkPlugins/group/false/DNS 0.22
295 TestNetworkPlugins/group/false/Localhost 0.2
296 TestNetworkPlugins/group/false/HairPin 0.19
297 TestNetworkPlugins/group/kubenet/Start 113.91
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.37
300 TestNetworkPlugins/group/flannel/ControllerPod 5.02
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
302 TestNetworkPlugins/group/flannel/NetCatPod 12.34
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.29
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
306 TestNetworkPlugins/group/flannel/DNS 0.23
307 TestNetworkPlugins/group/flannel/Localhost 0.22
308 TestNetworkPlugins/group/flannel/HairPin 0.2
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
310 TestNetworkPlugins/group/bridge/NetCatPod 13.35
312 TestStartStop/group/old-k8s-version/serial/FirstStart 150.3
313 TestNetworkPlugins/group/bridge/DNS 0.22
314 TestNetworkPlugins/group/bridge/Localhost 0.21
315 TestNetworkPlugins/group/bridge/HairPin 0.19
317 TestStartStop/group/no-preload/serial/FirstStart 117.9
318 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
319 TestNetworkPlugins/group/kubenet/NetCatPod 11.3
321 TestStartStop/group/embed-certs/serial/FirstStart 141.5
322 TestNetworkPlugins/group/kubenet/DNS 0.23
323 TestNetworkPlugins/group/kubenet/Localhost 0.18
324 TestNetworkPlugins/group/kubenet/HairPin 0.2
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 114.3
327 TestStartStop/group/no-preload/serial/DeployApp 10.58
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
329 TestStartStop/group/no-preload/serial/Stop 13.13
330 TestStartStop/group/old-k8s-version/serial/DeployApp 10.45
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
332 TestStartStop/group/no-preload/serial/SecondStart 613.3
333 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
334 TestStartStop/group/old-k8s-version/serial/Stop 13.14
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.47
336 TestStartStop/group/embed-certs/serial/DeployApp 10.51
337 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
338 TestStartStop/group/old-k8s-version/serial/SecondStart 459.64
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.15
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
342 TestStartStop/group/embed-certs/serial/Stop 13.13
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 347.05
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
346 TestStartStop/group/embed-certs/serial/SecondStart 335.87
347 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
348 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
350 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
351 TestStartStop/group/embed-certs/serial/Pause 2.62
353 TestStartStop/group/newest-cni/serial/FirstStart 76.56
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.68
357 TestStartStop/group/newest-cni/serial/DeployApp 0
358 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
359 TestStartStop/group/newest-cni/serial/Stop 13.12
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
361 TestStartStop/group/newest-cni/serial/SecondStart 47.31
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
365 TestStartStop/group/old-k8s-version/serial/Pause 2.49
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
369 TestStartStop/group/newest-cni/serial/Pause 2.25
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
372 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
373 TestStartStop/group/no-preload/serial/Pause 2.41

TestDownloadOnly/v1.16.0/json-events (6.08s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-286078 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-286078 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (6.075740725s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.08s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-286078
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-286078: exit status 85 (62.857746ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-286078 | jenkins | v1.29.0 | 23 Feb 23 21:59 UTC |          |
	|         | -p download-only-286078        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 21:59:06
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 21:59:06.578724   66939 out.go:296] Setting OutFile to fd 1 ...
	I0223 21:59:06.578913   66939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:06.578941   66939 out.go:309] Setting ErrFile to fd 2...
	I0223 21:59:06.578957   66939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:06.579379   66939 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	W0223 21:59:06.579547   66939 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-59858/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-59858/.minikube/config/config.json: no such file or directory
	I0223 21:59:06.580164   66939 out.go:303] Setting JSON to true
	I0223 21:59:06.580947   66939 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6099,"bootTime":1677183448,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 21:59:06.581009   66939 start.go:135] virtualization: kvm guest
	I0223 21:59:06.583606   66939 out.go:97] [download-only-286078] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 21:59:06.585086   66939 out.go:169] MINIKUBE_LOCATION=15909
	W0223 21:59:06.583718   66939 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 21:59:06.583747   66939 notify.go:220] Checking for updates...
	I0223 21:59:06.587686   66939 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 21:59:06.588913   66939 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 21:59:06.590236   66939 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 21:59:06.591487   66939 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 21:59:06.593939   66939 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 21:59:06.594165   66939 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 21:59:06.628599   66939 out.go:97] Using the kvm2 driver based on user configuration
	I0223 21:59:06.628621   66939 start.go:296] selected driver: kvm2
	I0223 21:59:06.628633   66939 start.go:857] validating driver "kvm2" against <nil>
	I0223 21:59:06.628938   66939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 21:59:06.629020   66939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-59858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 21:59:06.644223   66939 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 21:59:06.644276   66939 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 21:59:06.644771   66939 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0223 21:59:06.644916   66939 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 21:59:06.644945   66939 cni.go:84] Creating CNI manager for ""
	I0223 21:59:06.644960   66939 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 21:59:06.644966   66939 start_flags.go:319] config:
	{Name:download-only-286078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-286078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 21:59:06.645157   66939 iso.go:125] acquiring lock: {Name:mka4f25d544a3ff8c2a2fab814177dd4b23f9fc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 21:59:06.647056   66939 out.go:97] Downloading VM boot image ...
	I0223 21:59:06.647080   66939 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15909-59858/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
	I0223 21:59:08.758619   66939 out.go:97] Starting control plane node download-only-286078 in cluster download-only-286078
	I0223 21:59:08.758675   66939 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 21:59:08.784041   66939 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 21:59:08.784079   66939 cache.go:57] Caching tarball of preloaded images
	I0223 21:59:08.784246   66939 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 21:59:08.785791   66939 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0223 21:59:08.785806   66939 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:08.808212   66939 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 21:59:11.172899   66939 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:11.172986   66939 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-286078"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.26.1/json-events (3.67s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-286078 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-286078 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=kvm2 : (3.667584583s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (3.67s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-286078
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-286078: exit status 85 (60.780294ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-286078 | jenkins | v1.29.0 | 23 Feb 23 21:59 UTC |          |
	|         | -p download-only-286078        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-286078 | jenkins | v1.29.0 | 23 Feb 23 21:59 UTC |          |
	|         | -p download-only-286078        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 21:59:12
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 21:59:12.718994   66975 out.go:296] Setting OutFile to fd 1 ...
	I0223 21:59:12.719138   66975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:12.719145   66975 out.go:309] Setting ErrFile to fd 2...
	I0223 21:59:12.719150   66975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:12.719252   66975 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	W0223 21:59:12.719369   66975 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-59858/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-59858/.minikube/config/config.json: no such file or directory
	I0223 21:59:12.719766   66975 out.go:303] Setting JSON to true
	I0223 21:59:12.720585   66975 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6105,"bootTime":1677183448,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 21:59:12.720641   66975 start.go:135] virtualization: kvm guest
	I0223 21:59:12.722878   66975 out.go:97] [download-only-286078] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 21:59:12.724418   66975 out.go:169] MINIKUBE_LOCATION=15909
	I0223 21:59:12.723022   66975 notify.go:220] Checking for updates...
	I0223 21:59:12.727099   66975 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 21:59:12.728488   66975 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 21:59:12.729747   66975 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 21:59:12.731040   66975 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-286078"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.37s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-286078
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-124982 --alsologtostderr --binary-mirror http://127.0.0.1:35209 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-124982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-124982
--- PASS: TestBinaryMirror (0.63s)

TestOffline (130.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-325944 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-325944 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m9.075242313s)
helpers_test.go:175: Cleaning up "offline-docker-325944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-325944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-325944: (1.901645811s)
--- PASS: TestOffline (130.98s)

TestAddons/Setup (150.72s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-476957 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-476957 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.718736698s)
--- PASS: TestAddons/Setup (150.72s)

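For reference, each of these addons can also be toggled individually on the running profile rather than being passed to start; a minimal sketch, using the registry addon as the example:

    # enable, inspect, and disable a single addon on the existing profile
    out/minikube-linux-amd64 -p addons-476957 addons enable registry
    out/minikube-linux-amd64 -p addons-476957 addons list
    out/minikube-linux-amd64 -p addons-476957 addons disable registry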

TestAddons/parallel/Registry (16.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 19.557135ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nsrgz" [796008ec-cbeb-416f-90b4-aed14a164700] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012624077s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-djbdk" [bab30101-8120-4c6f-ba87-74c22cef0f42] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013995859s
addons_test.go:305: (dbg) Run:  kubectl --context addons-476957 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-476957 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-476957 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.882225251s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 ip
2023/02/23 22:02:04 [DEBUG] GET http://192.168.39.123:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.61s)

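The in-cluster reachability check above can be reproduced by hand with a throwaway busybox pod (the pod name here is illustrative):

    # hit the registry addon's cluster-local service from inside the cluster
    kubectl --context addons-476957 run registry-check --rm --restart=Never -it \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"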

TestAddons/parallel/Ingress (29.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-476957 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-476957 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-476957 replace --force -f testdata/nginx-ingress-v1.yaml: (1.046282776s)
addons_test.go:210: (dbg) Run:  kubectl --context addons-476957 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [44e42eb9-66da-4521-b731-f4764bfb7780] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [44e42eb9-66da-4521-b731-f4764bfb7780] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.007065392s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-476957 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.123
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-476957 addons disable ingress-dns --alsologtostderr -v=1: (1.197271475s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-476957 addons disable ingress --alsologtostderr -v=1: (7.588242944s)
--- PASS: TestAddons/parallel/Ingress (29.89s)

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 4.236945ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-bqhnf" [2b5975b0-d302-49d0-9a30-e93664cb2e21] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015346547s
addons_test.go:380: (dbg) Run:  kubectl --context addons-476957 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/HelmTiller (19.89s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 3.093011ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-nxmx6" [b241205a-5f8b-4014-8115-e93f89d4b372] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.018365054s
addons_test.go:438: (dbg) Run:  kubectl --context addons-476957 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-476957 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (14.318450319s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.89s)

TestAddons/parallel/CSI (64.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 24.5094ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [667dad9a-f820-4d42-affc-47f409ced095] Pending
helpers_test.go:344: "task-pv-pod" [667dad9a-f820-4d42-affc-47f409ced095] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [667dad9a-f820-4d42-affc-47f409ced095] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.014527249s
addons_test.go:549: (dbg) Run:  kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-476957 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-476957 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-476957 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-476957 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [62a56161-8caf-4b84-8d7a-93654351fb4f] Pending
helpers_test.go:344: "task-pv-pod-restore" [62a56161-8caf-4b84-8d7a-93654351fb4f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [62a56161-8caf-4b84-8d7a-93654351fb4f] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.015165208s
addons_test.go:591: (dbg) Run:  kubectl --context addons-476957 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-476957 delete pod task-pv-pod-restore: (1.098697191s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-476957 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-476957 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-476957 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.604524899s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-476957 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.57s)

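The claim/snapshot/restore sequence above reduces to a short kubectl workflow; a hand-run sketch, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and the testdata manifests are available locally (the jsonpath form of kubectl wait needs kubectl 1.23 or newer):

    # create a claim, wait for it to bind, then snapshot and restore it
    kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-476957 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
    kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-476957 create -f testdata/csi-hostpath-driver/pvc-restore.yaml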

TestAddons/parallel/Headlamp (12.18s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-476957 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-476957 --alsologtostderr -v=1: (1.170385967s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-6m9xr" [0f4ee88f-c329-435f-9566-963d4fec6c39] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-6m9xr" [0f4ee88f-c329-435f-9566-963d4fec6c39] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009368102s
--- PASS: TestAddons/parallel/Headlamp (12.18s)

TestAddons/parallel/CloudSpanner (5.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-tb8lt" [8c0547ce-0a3b-4c25-b7fc-3eda0eb8ea91] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010887349s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-476957
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-476957 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-476957 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-476957
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-476957: (13.096895837s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-476957
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-476957
--- PASS: TestAddons/StoppedEnableDisable (13.27s)

TestCertOptions (87.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-636455 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0223 22:41:18.968747   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-636455 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m25.483359549s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-636455 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-636455 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-636455 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-636455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-636455
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-636455: (1.106161835s)
--- PASS: TestCertOptions (87.13s)

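The openssl call above dumps the whole apiserver certificate; to spot-check just the names and IPs injected by --apiserver-names and --apiserver-ips, its output can be filtered:

    # list the SANs baked into the apiserver certificate
    out/minikube-linux-amd64 -p cert-options-636455 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'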

TestCertExpiration (292.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-541487 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-541487 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m24.63996176s)
E0223 22:39:51.882718   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:39:57.047941   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-541487 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0223 22:42:35.338743   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-541487 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (26.627969998s)
helpers_test.go:175: Cleaning up "cert-expiration-541487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-541487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-541487: (1.115866889s)
--- PASS: TestCertExpiration (292.39s)

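The renewal path exercised here is just two starts against the same profile with different --cert-expiration values; a minimal sketch (the profile name is illustrative, and the sleep only matters if the short-lived certificates should actually lapse first):

    # issue short-lived certs, let them expire, then renew with a longer TTL
    out/minikube-linux-amd64 start -p cert-demo --memory=2048 --cert-expiration=3m --driver=kvm2
    sleep 180
    out/minikube-linux-amd64 start -p cert-demo --memory=2048 --cert-expiration=8760h --driver=kvm2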

TestDockerFlags (64.78s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-841884 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-841884 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m2.93409004s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-841884 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-841884 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-841884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-841884
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-841884: (1.402626425s)
--- PASS: TestDockerFlags (64.78s)

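Whether the --docker-env and --docker-opt values actually reached the Docker daemon can be confirmed against the systemd unit, as the assertions above do:

    # the env vars should appear under Environment, the opts under ExecStart
    out/minikube-linux-amd64 -p docker-flags-841884 ssh \
      "sudo systemctl show docker --property=Environment --no-pager" | grep -E 'FOO=BAR|BAZ=BAT'
    out/minikube-linux-amd64 -p docker-flags-841884 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager" | grep -E 'debug|icc=true'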

TestForceSystemdFlag (85.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-452533 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E0223 22:36:48.831090   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-452533 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m24.033514195s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-452533 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-452533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-452533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-452533: (1.241166256s)
--- PASS: TestForceSystemdFlag (85.55s)

TestForceSystemdEnv (66.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-876876 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-876876 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m4.761046668s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-876876 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-876876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-876876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-876876: (1.082600108s)
--- PASS: TestForceSystemdEnv (66.08s)

TestKVMDriverInstallOrUpdate (4.67s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.67s)

TestErrorSpam/setup (56.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-234727 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-234727 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-234727 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-234727 --driver=kvm2 : (56.484573479s)
--- PASS: TestErrorSpam/setup (56.48s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 pause
--- PASS: TestErrorSpam/pause (1.20s)

TestErrorSpam/unpause (1.32s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (12.58s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 stop: (12.432614079s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-234727 --log_dir /tmp/nospam-234727 stop
--- PASS: TestErrorSpam/stop (12.58s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/test/nested/copy/66927/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 start -p functional-053497 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 start -p functional-053497 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m14.692051179s)
--- PASS: TestFunctional/serial/StartWithProxy (74.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.76s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-linux-amd64 start -p functional-053497 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-linux-amd64 start -p functional-053497 --alsologtostderr -v=8: (42.754655363s)
functional_test.go:657: soft start took 42.755217149s for "functional-053497" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.76s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-053497 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:3.1: (1.798975654s)
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:3.3: (1.782392872s)
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:latest: (1.514344026s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.10s)
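
The three cache pulls above exercise `minikube cache add` end to end. As a hedged sketch, the same flow by hand (profile and image names taken from this run; the final command is how the later verify_cache_inside_node step checks the result):

	out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:3.1
	out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:3.3
	out/minikube-linux-amd64 -p functional-053497 cache add k8s.gcr.io/pause:latest
	# confirm the cached images are visible to the runtime inside the VM
	out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl images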

TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-053497 /tmp/TestFunctionalserialCacheCmdcacheadd_local4008179487/001
functional_test.go:1083: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cache add minikube-local-cache-test:functional-053497
functional_test.go:1083: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 cache add minikube-local-cache-test:functional-053497: (1.173104658s)
functional_test.go:1088: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cache delete minikube-local-cache-test:functional-053497
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-053497
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (217.912045ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 cache reload: (1.148449499s)
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
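
The cycle above is the whole point of `cache reload`: the image is deleted inside the VM, `crictl inspecti` confirms it is gone (the expected exit status 1), and `cache reload` pushes it back in from the host-side cache. A minimal sketch with the same profile and image as this run:

	# remove the image from the VM's Docker daemon
	out/minikube-linux-amd64 -p functional-053497 ssh sudo docker rmi k8s.gcr.io/pause:latest
	# inspecti now fails, which proves the image is really gone
	out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl inspecti k8s.gcr.io/pause:latest || true
	# restore it from the host cache and re-check
	out/minikube-linux-amd64 -p functional-053497 cache reload
	out/minikube-linux-amd64 -p functional-053497 ssh sudo crictl inspecti k8s.gcr.io/pause:latest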

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 kubectl -- --context functional-053497 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-053497 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (49.48s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-linux-amd64 start -p functional-053497 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0223 22:06:48.831706   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:48.837660   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:48.847888   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:48.868101   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:48.908387   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:48.988717   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:49.149138   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:49.469725   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:50.110661   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:51.391149   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:53.951990   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:06:59.072414   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:07:09.313348   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:07:29.794463   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-linux-amd64 start -p functional-053497 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.481409542s)
functional_test.go:755: restart took 49.481527223s for "functional-053497" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.48s)
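
`--extra-config` takes the form <component>.<flag>=<value> and is applied by restarting against the live cluster, which is why this test re-runs `start`. A hedged sketch of the same pattern (the kubelet flag is an illustrative assumption, not part of this run):

	out/minikube-linux-amd64 start -p functional-053497 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.max-pods=100 \
	  --wait=all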

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-053497 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 logs
functional_test.go:1230: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 logs: (1.142453987s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 logs --file /tmp/TestFunctionalserialLogsFileCmd2506798186/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 logs --file /tmp/TestFunctionalserialLogsFileCmd2506798186/001/logs.txt: (1.203978194s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 config get cpus: exit status 14 (72.14421ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 config get cpus: exit status 14 (64.562521ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
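
The two exit-status-14 results above are expected behaviour: `config get` on an unset key fails rather than printing an empty value, so scripts can distinguish "unset" from "empty". A minimal sketch of the same set/get/unset cycle:

	out/minikube-linux-amd64 -p functional-053497 config set cpus 2
	out/minikube-linux-amd64 -p functional-053497 config get cpus    # prints 2
	out/minikube-linux-amd64 -p functional-053497 config unset cpus
	out/minikube-linux-amd64 -p functional-053497 config get cpus || echo "cpus is unset (exit $?)"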

TestFunctional/parallel/DashboardCmd (25.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-053497 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-053497 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 73088: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.06s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-linux-amd64 start -p functional-053497 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-053497 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (136.249948ms)

-- stdout --
	* [functional-053497] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0223 22:07:59.287689   72564 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:07:59.287821   72564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:07:59.287831   72564 out.go:309] Setting ErrFile to fd 2...
	I0223 22:07:59.287838   72564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:07:59.287941   72564 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:07:59.288482   72564 out.go:303] Setting JSON to false
	I0223 22:07:59.289356   72564 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6632,"bootTime":1677183448,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:07:59.289413   72564 start.go:135] virtualization: kvm guest
	I0223 22:07:59.291966   72564 out.go:177] * [functional-053497] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:07:59.293589   72564 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:07:59.293638   72564 notify.go:220] Checking for updates...
	I0223 22:07:59.295145   72564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:07:59.296679   72564 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:07:59.298129   72564 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 22:07:59.299512   72564 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:07:59.300902   72564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:07:59.302621   72564 config.go:182] Loaded profile config "functional-053497": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:07:59.303149   72564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:07:59.303234   72564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:07:59.317351   72564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0223 22:07:59.317744   72564 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:07:59.318283   72564 main.go:141] libmachine: Using API Version  1
	I0223 22:07:59.318304   72564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:07:59.318761   72564 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:07:59.318946   72564 main.go:141] libmachine: (functional-053497) Calling .DriverName
	I0223 22:07:59.319133   72564 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:07:59.319460   72564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:07:59.319495   72564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:07:59.333985   72564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I0223 22:07:59.334395   72564 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:07:59.334837   72564 main.go:141] libmachine: Using API Version  1
	I0223 22:07:59.334857   72564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:07:59.335162   72564 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:07:59.335368   72564 main.go:141] libmachine: (functional-053497) Calling .DriverName
	I0223 22:07:59.367351   72564 out.go:177] * Using the kvm2 driver based on existing profile
	I0223 22:07:59.368797   72564 start.go:296] selected driver: kvm2
	I0223 22:07:59.368812   72564 start.go:857] validating driver "kvm2" against &{Name:functional-053497 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-053497 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:07:59.368958   72564 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:07:59.371271   72564 out.go:177] 
	W0223 22:07:59.372957   72564 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 22:07:59.374446   72564 out.go:177] 

** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-linux-amd64 start -p functional-053497 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
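
Exit status 23 here is minikube's RSRC_INSUFFICIENT_REQ_MEMORY error: `--dry-run` validates the requested settings against the existing profile without touching the VM, so an undersized `--memory` fails fast. A sketch of using that as a pre-flight guard:

	if ! out/minikube-linux-amd64 start -p functional-053497 --dry-run --memory 250MB --driver=kvm2; then
	  echo "start would be rejected: 250MB is below the usable minimum of 1800MB"
	fi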

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 start -p functional-053497 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-053497 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (150.081841ms)

-- stdout --
	* [functional-053497] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0223 22:07:59.584164   72619 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:07:59.584354   72619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:07:59.584364   72619 out.go:309] Setting ErrFile to fd 2...
	I0223 22:07:59.584371   72619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:07:59.584553   72619 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:07:59.585101   72619 out.go:303] Setting JSON to false
	I0223 22:07:59.585996   72619 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6632,"bootTime":1677183448,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:07:59.586056   72619 start.go:135] virtualization: kvm guest
	I0223 22:07:59.588230   72619 out.go:177] * [functional-053497] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0223 22:07:59.590287   72619 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:07:59.590294   72619 notify.go:220] Checking for updates...
	I0223 22:07:59.591738   72619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:07:59.593379   72619 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	I0223 22:07:59.594876   72619 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	I0223 22:07:59.596352   72619 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:07:59.597749   72619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:07:59.599409   72619 config.go:182] Loaded profile config "functional-053497": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:07:59.599765   72619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:07:59.599827   72619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:07:59.614114   72619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0223 22:07:59.614556   72619 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:07:59.615178   72619 main.go:141] libmachine: Using API Version  1
	I0223 22:07:59.615224   72619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:07:59.615654   72619 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:07:59.615840   72619 main.go:141] libmachine: (functional-053497) Calling .DriverName
	I0223 22:07:59.616068   72619 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:07:59.616395   72619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:07:59.616461   72619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:07:59.630291   72619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0223 22:07:59.630815   72619 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:07:59.631347   72619 main.go:141] libmachine: Using API Version  1
	I0223 22:07:59.631379   72619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:07:59.631728   72619 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:07:59.631920   72619 main.go:141] libmachine: (functional-053497) Calling .DriverName
	I0223 22:07:59.662937   72619 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0223 22:07:59.664172   72619 start.go:296] selected driver: kvm2
	I0223 22:07:59.664186   72619 start.go:857] validating driver "kvm2" against &{Name:functional-053497 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-053497 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:07:59.664305   72619 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:07:59.666505   72619 out.go:177] 
	W0223 22:07:59.667820   72619 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 22:07:59.669222   72619 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 status
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
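
The `-f` flag above renders a Go template over the status structure, so single fields such as .Host, .Kubelet, .APIServer and .Kubeconfig can be extracted for scripting. A minimal sketch with the same profile:

	out/minikube-linux-amd64 -p functional-053497 status -f '{{.Host}}'        # e.g. Running
	out/minikube-linux-amd64 -p functional-053497 status -f '{{.APIServer}}'   # e.g. Running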

TestFunctional/parallel/ServiceCmdConnect (9.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-053497 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-053497 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-pxsvd" [7b59bb35-00eb-44f0-ae2d-0eb3ddb51825] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-pxsvd" [7b59bb35-00eb-44f0-ae2d-0eb3ddb51825] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.018496607s
functional_test.go:1617: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 service hello-node-connect --url
functional_test.go:1623: found endpoint for hello-node-connect: http://192.168.39.145:30315
functional_test.go:1643: http://192.168.39.145:30315: success! body:

Hostname: hello-node-connect-5cf7cc858f-pxsvd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.145:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.145:30315
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.65s)
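
The round trip above is: create a deployment, expose it as a NodePort service, ask minikube for the node URL, and issue an HTTP GET. A hedged sketch of the same flow (`curl` stands in for the test's Go HTTP client; everything else is taken from this run):

	kubectl --context functional-053497 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-053497 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-053497 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reflects the request back, as in the body shown above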

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (52.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c3c7a186-154c-47ed-a7d4-4a908a31d9fe] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010544388s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-053497 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-053497 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-053497 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-053497 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c3ae388-3d20-4036-a6cb-5b43e319e9ce] Pending
helpers_test.go:344: "sp-pod" [6c3ae388-3d20-4036-a6cb-5b43e319e9ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c3ae388-3d20-4036-a6cb-5b43e319e9ce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 34.014629759s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-053497 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-053497 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-053497 delete -f testdata/storage-provisioner/pod.yaml: (1.011763654s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-053497 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [63a15f26-cc42-45a1-aec7-0789a90d0573] Pending
helpers_test.go:344: "sp-pod" [63a15f26-cc42-45a1-aec7-0789a90d0573] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [63a15f26-cc42-45a1-aec7-0789a90d0573] Running
2023/02/23 22:08:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.012962523s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-053497 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.18s)
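
The persistence check reduces to: write a file through one pod, delete that pod, start a fresh pod against the same claim, and confirm the file survived. Condensed from the commands above (in practice, wait for sp-pod to be Running between steps, as the test does):

	kubectl --context functional-053497 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-053497 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-053497 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-053497 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-053497 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-053497 exec sp-pod -- ls /tmp/mount    # foo is still there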

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh -n functional-053497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 cp functional-053497:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3358166548/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh -n functional-053497 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.04s)

TestFunctional/parallel/MySQL (35.36s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-053497 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-j95gp" [35529002-26e9-44ee-a275-61a7fbdb4042] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-j95gp" [35529002-26e9-44ee-a275-61a7fbdb4042] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.012156501s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;": exit status 1 (495.692851ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;": exit status 1 (354.607623ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0223 22:08:10.755506   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
functional_test.go:1772: (dbg) Run:  kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;": exit status 1 (140.345803ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-053497 exec mysql-888f84dd9-j95gp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.36s)
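
The three failed probes above are the normal warm-up pattern: the pod reports Running before mysqld finishes initialising, so the client sees ERROR 1045/2002 until the server is ready. A hedged sketch of the retry loop (`deploy/mysql` is an assumption used to avoid the generated pod name; the test targets the pod directly):

	until kubectl --context functional-053497 exec deploy/mysql -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 5   # retry while mysqld is still initialising
	done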

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/66927/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /etc/test/nested/copy/66927/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (1.51s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/66927.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /etc/ssl/certs/66927.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/66927.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /usr/share/ca-certificates/66927.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/669272.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /etc/ssl/certs/669272.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/669272.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /usr/share/ca-certificates/669272.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)
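
The `.0` names checked above are OpenSSL subject-hash links: alongside each synced PEM, a copy named after the certificate's subject hash is installed so the system trust store can resolve it. Assuming that is what maps 66927.pem to 51391683.0 in this run, the hash can be derived inside the VM like this:

	# prints the 8-hex-digit subject hash used for the .0 filename
	openssl x509 -noout -hash -in /etc/ssl/certs/66927.pem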

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-053497 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 ssh "sudo systemctl is-active crio": exit status 1 (256.125696ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)
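
The non-zero exit is the assertion here: with Docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits with status 3 inside the VM, and `minikube ssh` surfaces that as a failure. A minimal sketch of the same check:

	out/minikube-linux-amd64 -p functional-053497 ssh "sudo systemctl is-active crio" \
	  || echo "crio is not the active runtime"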

TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-053497 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-053497
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-053497
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-053497 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-053497 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-053497 | 98c164446a866 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-053497 | c481e163c4f79 | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-053497 image ls --format json:
[{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"c481e163c4f79a90c607691d70ec6dcfef32c83abca6a2882c7a47d95e3e26b8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-053497"],"size":"30"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-053497"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"98c164446a866d0b34b5ce2fa7b16455f38992217e391ddb88e34550de330d42","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-053497"],"size":"1240000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
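For anyone consuming this report programmatically: the --format json listing above is a flat JSON array of image records. Below is a minimal Go sketch that decodes it, assuming only the schema visible in the stdout above; the helper itself is illustrative and not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the JSON above. Note that "size"
// is a decimal byte count serialized as a string, not a number.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same invocation as the test above; adjust binary path and profile to taste.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-053497",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.12s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}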

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-053497 image ls --format yaml:
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: c481e163c4f79a90c607691d70ec6dcfef32c83abca6a2882c7a47d95e3e26b8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-053497
size: "30"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-053497
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 ssh pgrep buildkitd: exit status 1 (217.151174ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image build -t localhost/my-image:functional-053497 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image build -t localhost/my-image:functional-053497 testdata/build: (3.904690428s)
functional_test.go:317: (dbg) Stdout: out/minikube-linux-amd64 -p functional-053497 image build -t localhost/my-image:functional-053497 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 28bfd2e00430
Removing intermediate container 28bfd2e00430
---> a15d0a5f6abc
Step 3/3 : ADD content.txt /
---> 98c164446a86
Successfully built 98c164446a86
Successfully tagged localhost/my-image:functional-053497
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.42s)
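The block above probes for buildkitd over ssh (the non-zero exit is expected on this Docker-runtime profile) and then builds the three-step image from testdata/build. A rough Go sketch of the same two steps, as an assumed external reproduction rather than the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const minikube = "out/minikube-linux-amd64" // binary path as used throughout this report

	// Exit status 1 simply means buildkitd is not running, as in the log above;
	// the build then goes through the Docker daemon inside the VM.
	if err := exec.Command(minikube, "-p", "functional-053497",
		"ssh", "pgrep buildkitd").Run(); err != nil {
		fmt.Println("buildkitd not running; Docker handles the build")
	}

	out, err := exec.Command(minikube, "-p", "functional-053497", "image", "build",
		"-t", "localhost/my-image:functional-053497", "testdata/build").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}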

TestFunctional/parallel/ImageCommands/Setup (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.246472054s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-053497
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.32s)

TestFunctional/parallel/DockerEnv/bash (1.07s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-053497 docker-env) && out/minikube-linux-amd64 status -p functional-053497"
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-053497 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.07s)
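The bash eval above points the host docker client at the daemon inside the functional-053497 VM. A Go equivalent is sketched below under the assumption that docker-env emits `export KEY="VALUE"` lines for bash shells (its exact output varies by shell): parse those lines and apply them before invoking docker.

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-053497",
		"docker-env").Output()
	if err != nil {
		panic(err)
	}
	// Apply each `export KEY="VALUE"` line to this process's environment.
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.HasPrefix(line, "export ") {
			continue // skip comments and blank lines
		}
		if kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2); len(kv) == 2 {
			os.Setenv(kv[0], strings.Trim(kv[1], `"`))
		}
	}
	// With DOCKER_HOST and friends set, this lists the images inside the VM.
	docker := exec.Command("docker", "images")
	docker.Stdout, docker.Stderr = os.Stdout, os.Stderr
	if err := docker.Run(); err != nil {
		panic(err)
	}
}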

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image load --daemon gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image load --daemon gcr.io/google-containers/addon-resizer:functional-053497: (4.11800437s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image load --daemon gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image load --daemon gcr.io/google-containers/addon-resizer:functional-053497: (2.342131535s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.65s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.169326429s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image load --daemon gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image load --daemon gcr.io/google-containers/addon-resizer:functional-053497: (4.263090814s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image save gcr.io/google-containers/addon-resizer:functional-053497 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image save gcr.io/google-containers/addon-resizer:functional-053497 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.365376s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.37s)

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 service list -o json
functional_test.go:1552: Took "501.060682ms" to run "out/minikube-linux-amd64 -p functional-053497 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image rm gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.643476737s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.90s)
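Taken together, ImageSaveToFile (earlier), ImageRemove, and ImageLoadFromFile round-trip an image through a tarball on the host. A condensed Go sketch of that round trip, reusing the profile, image, and tar path from the log; error handling is reduced to panics and the sequence is illustrative, not the tests' actual code:

package main

import "os/exec"

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		panic(string(out))
	}
}

func main() {
	tar := "/home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar"
	img := "gcr.io/google-containers/addon-resizer:functional-053497"
	run("-p", "functional-053497", "image", "save", img, tar) // image -> tarball on the host
	run("-p", "functional-053497", "image", "rm", img)        // drop it from the runtime
	run("-p", "functional-053497", "image", "load", tar)      // tarball -> image again
	run("-p", "functional-053497", "image", "ls")             // verify it is back
}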

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 image save --daemon gcr.io/google-containers/addon-resizer:functional-053497
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-053497 image save --daemon gcr.io/google-containers/addon-resizer:functional-053497: (3.206802682s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-053497
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.34s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1312: Took "268.134303ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1326: Took "49.484206ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1363: Took "375.049343ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1376: Took "129.146044ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/MountCmd/any-port (16.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-053497 /tmp/TestFunctionalparallelMountCmdany-port2258571675/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677190080913528767" to /tmp/TestFunctionalparallelMountCmdany-port2258571675/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677190080913528767" to /tmp/TestFunctionalparallelMountCmdany-port2258571675/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677190080913528767" to /tmp/TestFunctionalparallelMountCmdany-port2258571675/001/test-1677190080913528767
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.630679ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 22:08 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 22:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 22:08 test-1677190080913528767
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh cat /mount-9p/test-1677190080913528767
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-053497 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [254936fb-1dcb-4e46-a10c-7a676b823cd7] Pending
helpers_test.go:344: "busybox-mount" [254936fb-1dcb-4e46-a10c-7a676b823cd7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [254936fb-1dcb-4e46-a10c-7a676b823cd7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [254936fb-1dcb-4e46-a10c-7a676b823cd7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.009153858s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-053497 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-053497 /tmp/TestFunctionalparallelMountCmdany-port2258571675/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.02s)
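As in several blocks above, the first findmnt probe fails because the 9p mount is still settling, and the harness simply retries until it appears. A minimal Go sketch of that poll loop (a hypothetical helper with an arbitrary 30-second budget, not the test's own retry code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// Same probe the test runs: succeeds once the 9p mount shows up.
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-053497",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(time.Second) // not mounted yet; try again
	}
	panic("9p mount never appeared")
}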

TestFunctional/parallel/MountCmd/specific-port (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-053497 /tmp/TestFunctionalparallelMountCmdspecific-port4169517087/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.56537ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-053497 /tmp/TestFunctionalparallelMountCmdspecific-port4169517087/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-053497 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-053497 ssh "sudo umount -f /mount-9p": exit status 1 (264.012143ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-053497 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-053497 /tmp/TestFunctionalparallelMountCmdspecific-port4169517087/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-053497
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-053497
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-053497
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestGvisorAddon (340.18s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-703041 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-703041 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m28.233952271s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-703041 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0223 22:38:35.124166   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.129716   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.140041   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.160355   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.200670   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.281105   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.441616   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:35.762293   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:36.402758   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:37.683200   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:40.243971   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:38:45.365163   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-703041 cache add gcr.io/k8s-minikube/gvisor-addon:2: (26.400916985s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-703041 addons enable gvisor
E0223 22:38:55.606027   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-703041 addons enable gvisor: (4.11777345s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [3fdc0aeb-9d20-41be-99bf-81d7d7f22c2c] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.48457607s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-703041 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:78: (dbg) Run:  kubectl --context gvisor-703041 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [0aca2d95-8acb-409a-9bad-742b9a7c757e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-untrusted" [0aca2d95-8acb-409a-9bad-742b9a7c757e] Running
E0223 22:39:16.086880   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 16.010314648s
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [7fa16526-1469-42b5-b7d4-02d06902781e] Running
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.007235217s
gvisor_addon_test.go:91: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-703041
gvisor_addon_test.go:91: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-703041: (1m31.768415646s)
gvisor_addon_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-703041 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-703041 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m25.760151983s)
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [3fdc0aeb-9d20-41be-99bf-81d7d7f22c2c] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.019844555s
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [0aca2d95-8acb-409a-9bad-742b9a7c757e] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 5.007949528s
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [7fa16526-1469-42b5-b7d4-02d06902781e] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.007165775s
helpers_test.go:175: Cleaning up "gvisor-703041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-703041
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-703041: (1.442994377s)
--- PASS: TestGvisorAddon (340.18s)
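Each "waiting 4m0s for pods matching ..." step above is a label-selector wait against the cluster. The harness uses its own helpers; an assumed equivalent via kubectl wait, wrapped in Go for consistency with the other sketches here (context, label, namespace, and timeout mirror the log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the gvisor pod reports Ready, or fail after four minutes.
	cmd := exec.Command("kubectl", "--context", "gvisor-703041",
		"wait", "--for=condition=ready", "pod",
		"-l", "kubernetes.io/minikube-addons=gvisor",
		"-n", "kube-system", "--timeout=240s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}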

TestImageBuild/serial/NormalBuild (2.46s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-316086
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-316086: (2.455491495s)
--- PASS: TestImageBuild/serial/NormalBuild (2.46s)

TestImageBuild/serial/BuildWithBuildArg (1.51s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-316086
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-316086: (1.513739s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.51s)

TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-316086
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-316086
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

TestIngressAddonLegacy/StartLegacyK8sCluster (87.33s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-633033 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0223 22:09:32.676317   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-633033 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m27.328167749s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (87.33s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons enable ingress --alsologtostderr -v=5: (14.728978456s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.73s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.45s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.86s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-633033 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-633033 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.439241706s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-633033 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-633033 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a8268b33-09b7-429c-b6f4-115f54c73d8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a8268b33-09b7-429c-b6f4-115f54c73d8e] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.009860136s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-633033 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-633033 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-633033 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.186
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons disable ingress-dns --alsologtostderr -v=1: (1.847452843s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons disable ingress --alsologtostderr -v=1
E0223 22:11:48.831449   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-633033 addons disable ingress --alsologtostderr -v=1: (7.412206698s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.86s)

TestJSONOutput/start/Command (69.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-625099 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0223 22:12:16.516683   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:12:35.338498   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.343868   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.354117   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.374409   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.414687   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.495020   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.655447   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:35.976052   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:36.617146   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:37.897563   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:40.459373   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:45.580097   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:12:55.820659   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-625099 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m9.82354901s)
--- PASS: TestJSONOutput/start/Command (69.82s)
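With --output=json, minikube writes one CloudEvent per line to stdout; the parallel subtests below assert properties of that stream (distinct, increasing currentstep values). A minimal line-by-line decoder sketch, with the struct shape assumed from the sample event shown under TestErrorJSONOutput at the end of this section:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// cloudEvent keeps only the fields this report actually shows; the "data"
// values (currentstep, totalsteps, message, name) are all strings.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-625099",
		"--output=json", "--user=testUser", "--memory=2200", "--wait=true", "--driver=kvm2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		fmt.Printf("%s  step %s/%s  %q\n",
			ev.Type, ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
	}
	_ = cmd.Wait()
}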

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-625099 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-625099 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-625099 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-625099 --output=json --user=testUser: (13.100218183s)
--- PASS: TestJSONOutput/stop/Command (13.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.43s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-631637 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-631637 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.167119ms)

-- stdout --
	{"specversion":"1.0","id":"880cc32c-18d3-473e-ab72-3137d7810f87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-631637] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdafefd8-a818-4af8-8cda-55a7f14c3827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"1e0e3ea8-4969-4a55-9ffc-b5d00dfc9e3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f91d8c1-adc2-46a3-ae5d-86541df9cdea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig"}}
	{"specversion":"1.0","id":"509d4b4b-726d-403a-90fb-57a941f96a50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube"}}
	{"specversion":"1.0","id":"9389369d-c577-4cf9-a86d-29ea5ff07e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"804fc767-f9b2-434d-936a-00bdf04443ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7dce20f9-3514-4614-92ed-26a119185eae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-631637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-631637
--- PASS: TestErrorJSONOutput (0.43s)
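
Note: the -- stdout -- block above shows the shape of the events all TestJSONOutput subtests consume: one CloudEvents-style JSON object per line, with specversion, id, source, type, datacontenttype, and a string-to-string data map. A minimal decoding sketch (these are not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
}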

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (114.8s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-623152 --driver=kvm2 
E0223 22:13:16.301815   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:13:57.262556   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-623152 --driver=kvm2 : (55.493170794s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-626470 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-626470 --driver=kvm2 : (56.248358223s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-623152
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-626470
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-626470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-626470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-626470: (1.01902751s)
helpers_test.go:175: Cleaning up "first-623152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-623152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-623152: (1.003646855s)
--- PASS: TestMinikubeProfile (114.80s)
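
Note: the test switches the active profile with `minikube profile <name>` and then inspects `profile list -ojson`. This log does not show that JSON, so the sketch below deliberately avoids assuming a schema and only decodes the top level; the binary path is the one used throughout this report.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var top map[string]json.RawMessage // schema-agnostic decode
	if err := json.Unmarshal(out, &top); err != nil {
		panic(err)
	}
	for key := range top {
		fmt.Println("top-level key:", key)
	}
}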

TestMountStart/serial/StartWithMountFirst (27.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-606518 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0223 22:15:19.182901   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-606518 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.873717137s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.87s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-606518 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-606518 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
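
Note: the two commands above assert that the host directory is visible at /minikube-host and that a 9p filesystem is actually mounted. A rough standalone equivalent of the `mount | grep 9p` check, reading /proc/mounts inside the guest (hypothetical helper, not code from mount_start_test.go):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// has9pMount scans a mount table for a 9p filesystem at the given mount point.
func has9pMount(mountsFile, target string) (bool, error) {
	f, err := os.Open(mountsFile)
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(s.Text())
		if len(fields) >= 3 && fields[1] == target && fields[2] == "9p" {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := has9pMount("/proc/mounts", "/minikube-host")
	fmt.Println(ok, err)
}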

TestMountStart/serial/StartWithMountSecond (33.9s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-624665 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-624665 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (32.901336039s)
--- PASS: TestMountStart/serial/StartWithMountSecond (33.90s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-624665 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-624665 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (1.09s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-606518 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-606518 --alsologtostderr -v=5: (1.087054169s)
--- PASS: TestMountStart/serial/DeleteFirst (1.09s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-624665 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-624665 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.08s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-624665
E0223 22:16:14.560399   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:14.565695   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:14.575945   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:14.596259   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:14.636551   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:14.716870   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:14.877272   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:15.197835   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:15.838836   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-624665: (2.084791627s)
--- PASS: TestMountStart/serial/Stop (2.08s)

TestMountStart/serial/RestartStopped (26.62s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-624665
E0223 22:16:17.119387   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:19.680498   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:24.800807   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:16:35.041065   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-624665: (25.618425975s)
--- PASS: TestMountStart/serial/RestartStopped (26.62s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-624665 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-624665 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (137.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773885 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0223 22:16:48.831639   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:16:55.521996   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:17:35.338940   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:17:36.483154   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:18:03.023278   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:18:58.403654   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773885 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m16.734596044s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.16s)
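
Note: the fresh start brings up a two-node cluster (--nodes=2, 2200MB per node) and then verifies it with the status command. A sketch of re-running that verification outside the test harness, using the binary path and profile name from this log; per the StopNode output below, status exits non-zero (exit status 7) once any node is stopped:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-773885", "status", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// e.g. "exit status 7" when a node is stopped (see StopNode below)
		fmt.Println("status returned:", err)
	}
}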

TestMultiNode/serial/DeployApp2Nodes (4.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-773885 -- rollout status deployment/busybox: (3.075957555s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-9b7sp -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-zscjg -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-9b7sp -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-zscjg -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-9b7sp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-zscjg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.77s)
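
Note: the deployment evidently runs two busybox replicas spread across the nodes, and the jsonpath queries return space-separated values that feed the per-pod nslookup checks (external name, short service name, and full service FQDN). Parsing that jsonpath output is just a field split; a minimal sketch with an illustrative value:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Example shape only; the real IPs come from kubectl's
	// '{.items[*].status.podIP}' jsonpath output.
	out := "10.244.0.3 10.244.1.2"
	ips := strings.Fields(out)
	fmt.Println(len(ips), "pod IPs:", ips)
}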

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-9b7sp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-9b7sp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-zscjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773885 -- exec busybox-6b86dd6d48-zscjg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
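
Note: the shell pipeline above extracts the host IP from nslookup output: awk 'NR==5' keeps only line 5, and cut -d' ' -f3 takes the third single-space-separated field, which is then pinged (192.168.39.1 here, matching the ping target in the log). A sketch of the same extraction in Go, over illustrative busybox-style nslookup output; real output varies by resolver:

package main

import (
	"fmt"
	"strings"
)

func main() {
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal`
	lines := strings.Split(sample, "\n")
	fields := strings.Split(lines[4], " ") // cut -d' ' splits on single spaces
	fmt.Println(fields[2])                 // -f3 -> 192.168.39.1
}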

TestMultiNode/serial/AddNode (54.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-773885 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-773885 -v 3 --alsologtostderr: (53.776725245s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.36s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (7.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp testdata/cp-test.txt multinode-773885:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885:/home/docker/cp-test.txt multinode-773885-m02:/home/docker/cp-test_multinode-773885_multinode-773885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo cat /home/docker/cp-test_multinode-773885_multinode-773885-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885:/home/docker/cp-test.txt multinode-773885-m03:/home/docker/cp-test_multinode-773885_multinode-773885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m03 "sudo cat /home/docker/cp-test_multinode-773885_multinode-773885-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp testdata/cp-test.txt multinode-773885-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt multinode-773885:/home/docker/cp-test_multinode-773885-m02_multinode-773885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885 "sudo cat /home/docker/cp-test_multinode-773885-m02_multinode-773885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt multinode-773885-m03:/home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m03 "sudo cat /home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp testdata/cp-test.txt multinode-773885-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt multinode-773885:/home/docker/cp-test_multinode-773885-m03_multinode-773885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885 "sudo cat /home/docker/cp-test_multinode-773885-m03_multinode-773885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt multinode-773885-m02:/home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 ssh -n multinode-773885-m02 "sudo cat /home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.47s)
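
Note: every cp step above is followed by an `ssh -- sudo cat` so the copied file can be checked against the original: host-to-node, node-to-host, and node-to-node in both directions. The comparison itself reduces to a byte-equality check; a hypothetical standalone version (paths are illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// sameContent reports whether two local files have identical bytes.
func sameContent(a, b string) (bool, error) {
	da, err := os.ReadFile(a)
	if err != nil {
		return false, err
	}
	db, err := os.ReadFile(b)
	if err != nil {
		return false, err
	}
	return bytes.Equal(da, db), nil
}

func main() {
	ok, err := sameContent("testdata/cp-test.txt", "/tmp/cp-test_roundtrip.txt")
	fmt.Println(ok, err)
}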

TestMultiNode/serial/StopNode (3.94s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-773885 node stop m03: (3.084292582s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773885 status: exit status 7 (424.706243ms)

-- stdout --
	multinode-773885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773885-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-773885-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr: exit status 7 (427.766662ms)

-- stdout --
	multinode-773885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773885-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-773885-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 22:20:13.246630   80281 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:20:13.247157   80281 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:20:13.247202   80281 out.go:309] Setting ErrFile to fd 2...
	I0223 22:20:13.247219   80281 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:20:13.247470   80281 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:20:13.247814   80281 out.go:303] Setting JSON to false
	I0223 22:20:13.247865   80281 mustload.go:65] Loading cluster: multinode-773885
	I0223 22:20:13.248131   80281 notify.go:220] Checking for updates...
	I0223 22:20:13.248652   80281 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:20:13.248675   80281 status.go:255] checking status of multinode-773885 ...
	I0223 22:20:13.249173   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.249232   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.263565   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0223 22:20:13.263974   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.264514   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.264536   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.264999   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.265155   80281 main.go:141] libmachine: (multinode-773885) Calling .GetState
	I0223 22:20:13.266801   80281 status.go:330] multinode-773885 host status = "Running" (err=<nil>)
	I0223 22:20:13.266819   80281 host.go:66] Checking if "multinode-773885" exists ...
	I0223 22:20:13.267091   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.267126   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.280991   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0223 22:20:13.281352   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.281736   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.281757   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.282060   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.282198   80281 main.go:141] libmachine: (multinode-773885) Calling .GetIP
	I0223 22:20:13.284940   80281 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:20:13.285332   80281 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:16:59 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:20:13.285365   80281 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:20:13.285482   80281 host.go:66] Checking if "multinode-773885" exists ...
	I0223 22:20:13.285863   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.285938   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.300021   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I0223 22:20:13.300448   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.300909   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.300931   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.301226   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.301408   80281 main.go:141] libmachine: (multinode-773885) Calling .DriverName
	I0223 22:20:13.301586   80281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:20:13.301606   80281 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
	I0223 22:20:13.304082   80281 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:20:13.304453   80281 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:16:59 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
	I0223 22:20:13.304484   80281 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
	I0223 22:20:13.304634   80281 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
	I0223 22:20:13.304811   80281 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
	I0223 22:20:13.304961   80281 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
	I0223 22:20:13.305085   80281 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
	I0223 22:20:13.398562   80281 ssh_runner.go:195] Run: systemctl --version
	I0223 22:20:13.404683   80281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:20:13.417978   80281 kubeconfig.go:92] found "multinode-773885" server: "https://192.168.39.240:8443"
	I0223 22:20:13.418015   80281 api_server.go:165] Checking apiserver status ...
	I0223 22:20:13.418048   80281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:20:13.429547   80281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup
	I0223 22:20:13.438595   80281 api_server.go:181] apiserver freezer: "8:freezer:/kubepods/burstable/pode9459d167995578fa153c781fb0ec958/6a41aad93299981274f8fe5ca403afe397c0a9ee387a413413351eeed1d9128e"
	I0223 22:20:13.438644   80281 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pode9459d167995578fa153c781fb0ec958/6a41aad93299981274f8fe5ca403afe397c0a9ee387a413413351eeed1d9128e/freezer.state
	I0223 22:20:13.447369   80281 api_server.go:203] freezer state: "THAWED"
	I0223 22:20:13.447399   80281 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0223 22:20:13.452031   80281 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0223 22:20:13.452052   80281 status.go:421] multinode-773885 apiserver status = Running (err=<nil>)
	I0223 22:20:13.452073   80281 status.go:257] multinode-773885 status: &{Name:multinode-773885 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:20:13.452093   80281 status.go:255] checking status of multinode-773885-m02 ...
	I0223 22:20:13.452381   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.452422   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.467000   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0223 22:20:13.467415   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.467841   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.467865   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.468158   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.468324   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
	I0223 22:20:13.469677   80281 status.go:330] multinode-773885-m02 host status = "Running" (err=<nil>)
	I0223 22:20:13.469694   80281 host.go:66] Checking if "multinode-773885-m02" exists ...
	I0223 22:20:13.469949   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.469993   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.483865   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38039
	I0223 22:20:13.484197   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.484619   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.484643   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.484942   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.485111   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
	I0223 22:20:13.487739   80281 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:20:13.488153   80281 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:18:19 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:20:13.488177   80281 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:20:13.488322   80281 host.go:66] Checking if "multinode-773885-m02" exists ...
	I0223 22:20:13.488711   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.488757   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.503045   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0223 22:20:13.503382   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.503828   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.503849   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.504132   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.504293   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
	I0223 22:20:13.504447   80281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:20:13.504477   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
	I0223 22:20:13.507200   80281 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:20:13.507619   80281 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:18:19 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
	I0223 22:20:13.507660   80281 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
	I0223 22:20:13.507756   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
	I0223 22:20:13.507910   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
	I0223 22:20:13.508059   80281 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
	I0223 22:20:13.508226   80281 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
	I0223 22:20:13.598940   80281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:20:13.611611   80281 status.go:257] multinode-773885-m02 status: &{Name:multinode-773885-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:20:13.611647   80281 status.go:255] checking status of multinode-773885-m03 ...
	I0223 22:20:13.612013   80281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:20:13.612064   80281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:20:13.626696   80281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I0223 22:20:13.627102   80281 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:20:13.627638   80281 main.go:141] libmachine: Using API Version  1
	I0223 22:20:13.627671   80281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:20:13.627997   80281 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:20:13.628183   80281 main.go:141] libmachine: (multinode-773885-m03) Calling .GetState
	I0223 22:20:13.629629   80281 status.go:330] multinode-773885-m03 host status = "Stopped" (err=<nil>)
	I0223 22:20:13.629651   80281 status.go:343] host is not running, skipping remaining checks
	I0223 22:20:13.629656   80281 status.go:257] multinode-773885-m03 status: &{Name:multinode-773885-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.94s)
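
Note: stopping m03 leaves the control plane Running while the stopped worker reports host/kubelet Stopped, and the status command signals the mixed state with exit status 7. The plain-text status blocks above fold naturally into a node-to-host-state map; an illustrative parser, not the test's own code:

package main

import (
	"fmt"
	"strings"
)

// hostStates maps each node name in a status dump to its "host:" value.
func hostStates(out string) map[string]string {
	states := map[string]string{}
	var node string
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "":
			node = "" // blank line ends the current node block
		case node == "":
			node = line // each block starts with the node name
		case strings.HasPrefix(line, "host:"):
			states[node] = strings.TrimSpace(strings.TrimPrefix(line, "host:"))
		}
	}
	return states
}

func main() {
	out := "multinode-773885\ntype: Control Plane\nhost: Running\n\nmultinode-773885-m03\ntype: Worker\nhost: Stopped\n"
	fmt.Println(hostStates(out)) // map[multinode-773885:Running multinode-773885-m03:Stopped]
}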

TestMultiNode/serial/StartAfterStop (31.04s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-773885 node start m03 --alsologtostderr: (30.40732705s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.04s)

TestMultiNode/serial/StopMultiNode (112.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 stop
E0223 22:23:11.879518   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-773885 stop: (1m52.008184114s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773885 status: exit status 7 (89.912534ms)

-- stdout --
	multinode-773885
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-773885-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr: exit status 7 (81.280913ms)

-- stdout --
	multinode-773885
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-773885-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 22:24:34.268755   81253 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:24:34.268858   81253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:24:34.268866   81253 out.go:309] Setting ErrFile to fd 2...
	I0223 22:24:34.268871   81253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:24:34.268972   81253 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
	I0223 22:24:34.269123   81253 out.go:303] Setting JSON to false
	I0223 22:24:34.269157   81253 mustload.go:65] Loading cluster: multinode-773885
	I0223 22:24:34.269504   81253 notify.go:220] Checking for updates...
	I0223 22:24:34.270411   81253 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:24:34.270514   81253 status.go:255] checking status of multinode-773885 ...
	I0223 22:24:34.271352   81253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:24:34.271419   81253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:24:34.285448   81253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0223 22:24:34.285920   81253 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:24:34.286568   81253 main.go:141] libmachine: Using API Version  1
	I0223 22:24:34.286590   81253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:24:34.287011   81253 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:24:34.287223   81253 main.go:141] libmachine: (multinode-773885) Calling .GetState
	I0223 22:24:34.288604   81253 status.go:330] multinode-773885 host status = "Stopped" (err=<nil>)
	I0223 22:24:34.288621   81253 status.go:343] host is not running, skipping remaining checks
	I0223 22:24:34.288629   81253 status.go:257] multinode-773885 status: &{Name:multinode-773885 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:24:34.288650   81253 status.go:255] checking status of multinode-773885-m02 ...
	I0223 22:24:34.288979   81253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0223 22:24:34.289022   81253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 22:24:34.302910   81253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0223 22:24:34.303218   81253 main.go:141] libmachine: () Calling .GetVersion
	I0223 22:24:34.303641   81253 main.go:141] libmachine: Using API Version  1
	I0223 22:24:34.303665   81253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 22:24:34.303920   81253 main.go:141] libmachine: () Calling .GetMachineName
	I0223 22:24:34.304059   81253 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
	I0223 22:24:34.305428   81253 status.go:330] multinode-773885-m02 host status = "Stopped" (err=<nil>)
	I0223 22:24:34.305444   81253 status.go:343] host is not running, skipping remaining checks
	I0223 22:24:34.305452   81253 status.go:257] multinode-773885-m02 status: &{Name:multinode-773885-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (112.18s)

TestMultiNode/serial/RestartMultiNode (105.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773885 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0223 22:26:14.560557   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773885 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m44.972373665s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773885 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (105.50s)

TestMultiNode/serial/ValidateNameConflict (58.03s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-773885
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773885-m02 --driver=kvm2 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-773885-m02 --driver=kvm2 : exit status 14 (67.175138ms)

-- stdout --
	* [multinode-773885-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-773885-m02' is duplicated with machine name 'multinode-773885-m02' in profile 'multinode-773885'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773885-m03 --driver=kvm2 
E0223 22:26:48.831544   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773885-m03 --driver=kvm2 : (56.653571375s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-773885
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-773885: exit status 80 (215.210501ms)

-- stdout --
	* Adding node m03 to cluster multinode-773885
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-773885-m03 already exists in multinode-773885-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-773885-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-773885-m03: (1.045782779s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (58.03s)
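
Note: two guards are exercised here: `start` refuses a profile name that collides with an existing machine name (MK_USAGE, exit 14), and `node add` refuses a node-name collision (GUEST_NODE_ADD, exit 80). The core rule is name uniqueness across existing profiles and machines; a hypothetical sketch of that check:

package main

import "fmt"

// isUnique reports whether name collides with any existing profile or machine.
func isUnique(name string, existing []string) bool {
	for _, e := range existing {
		if e == name {
			return false
		}
	}
	return true
}

func main() {
	existing := []string{"multinode-773885", "multinode-773885-m02"}
	fmt.Println(isUnique("multinode-773885-m02", existing)) // false -> MK_USAGE
	fmt.Println(isUnique("multinode-773885-m03", existing)) // true
}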

TestPreload (167.85s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-611558 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0223 22:27:35.338272   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-611558 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m23.038093333s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-611558 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-611558 -- docker pull gcr.io/k8s-minikube/busybox: (1.422558879s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-611558
E0223 22:28:58.384542   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-611558: (13.099943965s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-611558 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-611558 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m8.966610669s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-611558 -- docker images
helpers_test.go:175: Cleaning up "test-preload-611558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-611558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-611558: (1.07814669s)
--- PASS: TestPreload (167.85s)

TestScheduledStopUnix (128.41s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-256667 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-256667 --memory=2048 --driver=kvm2 : (56.746335161s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256667 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-256667 -n scheduled-stop-256667
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256667 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256667 --cancel-scheduled
E0223 22:31:14.561599   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256667 -n scheduled-stop-256667
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-256667
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256667 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0223 22:31:48.831633   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-256667
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-256667: exit status 7 (65.869431ms)

-- stdout --
	scheduled-stop-256667
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256667 -n scheduled-stop-256667
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256667 -n scheduled-stop-256667: exit status 7 (63.322197ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-256667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-256667
--- PASS: TestScheduledStopUnix (128.41s)

TestSkaffold (88.75s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe199137704 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-383159 --memory=2600 --driver=kvm2 
E0223 22:32:35.338759   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:32:37.605347   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-383159 --memory=2600 --driver=kvm2 : (54.077562455s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe199137704 run --minikube-profile skaffold-383159 --kube-context skaffold-383159 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe199137704 run --minikube-profile skaffold-383159 --kube-context skaffold-383159 --status-check=true --port-forward=false --interactive=false: (22.94975682s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-76479dbf9d-4k68f" [c1a4d308-381d-4a54-a06b-4f936a80af50] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.019720208s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6bfc667b57-ptzgh" [fa125de6-4e31-49fc-b20c-a650d1902967] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00679842s
helpers_test.go:175: Cleaning up "skaffold-383159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-383159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-383159: (1.10041329s)
--- PASS: TestSkaffold (88.75s)

TestRunningBinaryUpgrade (166.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.1832472806.exe start -p running-upgrade-922238 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.1832472806.exe start -p running-upgrade-922238 --memory=2200 --vm-driver=kvm2 : (1m41.052678231s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-922238 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-922238 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m4.064802936s)
helpers_test.go:175: Cleaning up "running-upgrade-922238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-922238
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-922238: (1.15413982s)
--- PASS: TestRunningBinaryUpgrade (166.52s)

TestKubernetesUpgrade (220.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m40.953464784s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-541731
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-541731: (3.116085683s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-541731 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-541731 status --format={{.Host}}: exit status 7 (78.662056ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 : (48.088318602s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-541731 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (115.354044ms)

-- stdout --
	* [kubernetes-upgrade-541731] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-541731
	    minikube start -p kubernetes-upgrade-541731 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5417312 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-541731 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-541731 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 : (1m6.54482272s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-541731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-541731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-541731: (1.132682848s)
--- PASS: TestKubernetesUpgrade (220.11s)

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (188.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.6.2.3407801337.exe start -p stopped-upgrade-406705 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.6.2.3407801337.exe start -p stopped-upgrade-406705 --memory=2200 --vm-driver=kvm2 : (1m44.241791462s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.6.2.3407801337.exe -p stopped-upgrade-406705 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.6.2.3407801337.exe -p stopped-upgrade-406705 stop: (13.090287971s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-406705 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-406705 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m10.731501887s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (188.06s)

TestPause/serial/Start (113.36s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-548672 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0223 22:36:14.560504   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-548672 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m53.363182159s)
--- PASS: TestPause/serial/Start (113.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-406705
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-406705: (1.104693389s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790462 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-790462 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (70.697997ms)

-- stdout --
	* [NoKubernetes-790462] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (96.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790462 --driver=kvm2 
E0223 22:37:35.338553   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790462 --driver=kvm2 : (1m36.42952771s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-790462 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.74s)

TestPause/serial/SecondStartNoReconfiguration (68.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-548672 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-548672 --alsologtostderr -v=1 --driver=kvm2 : (1m8.6075008s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (68.63s)

TestPause/serial/Pause (2.34s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-548672 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-548672 --alsologtostderr -v=5: (2.344039176s)
--- PASS: TestPause/serial/Pause (2.34s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-548672 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-548672 --output=json --layout=cluster: exit status 2 (268.22686ms)

-- stdout --
	{"Name":"pause-548672","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-548672","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-548672 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (0.92s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-548672 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

TestNoKubernetes/serial/StartWithStopK8s (7.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790462 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790462 --no-kubernetes --driver=kvm2 : (6.58514337s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-790462 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-790462 status -o json: exit status 2 (250.414561ms)

-- stdout --
	{"Name":"NoKubernetes-790462","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-790462
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-790462: (1.152631193s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.99s)

TestPause/serial/DeletePaused (1.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-548672 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-548672 --alsologtostderr -v=5: (1.115518808s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

TestPause/serial/VerifyDeletedResources (0.76s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.76s)

TestNoKubernetes/serial/Start (50.46s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790462 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790462 --no-kubernetes --driver=kvm2 : (50.456947133s)
--- PASS: TestNoKubernetes/serial/Start (50.46s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-790462 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-790462 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.099112ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (66.82s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1m5.682052916s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.142602259s)
--- PASS: TestNoKubernetes/serial/ProfileList (66.82s)

TestNoKubernetes/serial/Stop (2.12s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-790462
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-790462: (2.120501476s)
--- PASS: TestNoKubernetes/serial/Stop (2.12s)

TestNoKubernetes/serial/StartNoArgs (28.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790462 --driver=kvm2 
E0223 22:41:14.560069   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790462 --driver=kvm2 : (28.625054538s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-790462 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-790462 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.446654ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/auto/Start (100.05s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0223 22:41:48.831680   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m40.048991727s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.05s)

TestNetworkPlugins/group/kindnet/Start (88.07s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m28.068507472s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.07s)

TestNetworkPlugins/group/calico/Start (128.54s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m8.542519276s)
--- PASS: TestNetworkPlugins/group/calico/Start (128.54s)

TestNetworkPlugins/group/custom-flannel/Start (122.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m2.745401263s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (122.75s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-wl2lv" [332e93a8-215c-4168-a9a4-467e9c12a8c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-wl2lv" [332e93a8-215c-4168-a9a4-467e9c12a8c7] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010598567s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/false/Start (95.23s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0223 22:43:55.973683   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:55.978978   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:55.989279   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:56.009581   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:56.049962   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:56.130313   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:56.290774   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:56.611407   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:57.252220   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:43:58.533068   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:44:01.093445   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:44:02.809729   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m35.225890747s)
--- PASS: TestNetworkPlugins/group/false/Start (95.23s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.13s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ljgsn" [aae7bf86-c795-40cc-b710-69a34a409158] Running
E0223 22:44:06.213639   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.132948131s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.13s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bhpq7" [87c40a09-1635-4a2e-b501-ca24b6373b14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 22:44:16.454575   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-bhpq7" [87c40a09-1635-4a2e-b501-ca24b6373b14] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.008352585s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.37s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (126.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m6.730528519s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (126.73s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q78kz" [ffa5f95d-3e85-4ccc-8d55-cca4f57f8411] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.025060324s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (14.49s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vd4fz" [6e0a0c61-9725-416f-b3ed-e3fb3a5390f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-vd4fz" [6e0a0c61-9725-416f-b3ed-e3fb3a5390f2] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.013883561s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.49s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-wtc6z" [864f7b98-48e6-4b59-b95a-5b96c963516e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-wtc6z" [864f7b98-48e6-4b59-b95a-5b96c963516e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.008754892s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-nn5jr" [7ca74a1f-0266-4b41-b4db-0bec28bf22c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-nn5jr" [7ca74a1f-0266-4b41-b4db-0bec28bf22c6] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.01069163s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.43s)

TestNetworkPlugins/group/flannel/Start (90.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m30.57774303s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.58s)

TestNetworkPlugins/group/bridge/Start (106.18s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0223 22:45:38.385262   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m46.176907646s)
--- PASS: TestNetworkPlugins/group/bridge/Start (106.18s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

TestNetworkPlugins/group/false/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/Start (113.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0223 22:46:14.560554   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:46:39.816655   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:46:48.831151   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-409320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m53.914128554s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (113.91s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-tswws" [c88a8a76-0672-46fa-b652-26c6259cec9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-tswws" [c88a8a76-0672-46fa-b652-26c6259cec9c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.014259221s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.37s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8b6sw" [3816995e-f933-4fdb-9484-e7bf11948e82] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017507077s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

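ControllerPod only runs for plugins that ship their own controller; for flannel it waits on the kube-flannel DaemonSet pod. A kubectl-only sketch of the equivalent check, with the test's 10m budget:

    # The flannel DaemonSet pod must be Ready before the connectivity tests run
    kubectl --context flannel-409320 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m
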
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-c6dt6" [65a80d8d-5399-4a7d-b990-8e4d6103216e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-c6dt6" [65a80d8d-5399-4a7d-b990-8e4d6103216e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.012774482s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

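The DNS subtest execs into the netcat deployment and resolves kubernetes.default, which exercises both the plugin's pod network and the cluster DNS service in one step. A standalone sketch of the same probe:

    # A zero exit status means cluster DNS is reachable over the pod network
    kubectl --context enable-default-cni-409320 exec deployment/netcat -- nslookup kubernetes.default \
      && echo "cluster DNS OK"
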
TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

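Localhost and HairPin are the two nc probes run from inside the netcat pod: the first dials 127.0.0.1:8080 directly, while the second dials the pod's own Service name (netcat), which only succeeds when the plugin handles hairpin traffic back to the originating pod. A sketch of the hairpin probe on its own:

    # -z probes without sending data, -w 5 caps the connect timeout at 5s;
    # exit 0 means the pod can reach itself back through its own Service
    kubectl --context enable-default-cni-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
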
TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-rfr98" [315a5052-905c-4fef-a109-199e74a98bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-rfr98" [315a5052-905c-4fef-a109-199e74a98bfc] Running
E0223 22:47:35.338290   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.009479177s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

TestStartStop/group/old-k8s-version/serial/FirstStart (150.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-856279 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-856279 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m30.303094363s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (150.30s)

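The old-k8s-version profile pins the cluster to Kubernetes v1.16.0 via --kubernetes-version; after the start completes, the pinned version can be read back off the node (a sketch):

    # The VERSION column should report v1.16.0 for this profile
    kubectl --context old-k8s-version-856279 get nodes
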
TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (117.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-093896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-093896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1: (1m57.900577352s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.90s)

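With --preload=false, minikube skips the preloaded-images tarball and lets the container runtime pull each image individually. The images that ended up in the profile can be listed afterwards (a sketch):

    # List images present in the no-preload profile's container runtime
    out/minikube-linux-amd64 image ls -p no-preload-093896
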
TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-409320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-409320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7lf7b" [801027e9-dbe5-4788-88d3-b656d850ab79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7lf7b" [801027e9-dbe5-4788-88d3-b656d850ab79] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.009658135s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.30s)

TestStartStop/group/embed-certs/serial/FirstStart (141.5s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-752018 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-752018 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1: (2m21.503350547s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (141.50s)

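--embed-certs makes minikube inline the client certificate and key data into the kubeconfig entry instead of pointing at files under .minikube. A coarse spot-check (a sketch; it greps the whole merged kubeconfig rather than just this profile):

    # Embedded credentials appear as *-data keys rather than file paths
    kubectl config view --raw -o json | grep -m1 client-certificate-data && echo "certs embedded"
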
TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-409320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-409320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)
E0223 22:56:48.831679   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:56:53.225151   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:57:03.309851   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:57:20.911200   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:57:22.576975   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:57:30.992671   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:57:35.338087   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (114.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-699715 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1
E0223 22:48:21.081381   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.086718   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.096964   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.117270   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.157551   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.237882   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.398322   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:21.718918   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:22.359840   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:23.641049   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:26.201261   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:31.322408   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:35.124040   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:48:41.562828   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:48:55.973300   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:49:02.043751   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:49:05.464243   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:05.469560   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:05.479863   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:05.500167   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:05.540531   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:05.620952   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:05.781467   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:06.102284   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:06.742585   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:08.023330   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:10.584153   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:15.704387   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:49:17.606425   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:49:23.657114   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:49:25.945136   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-699715 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1: (1m54.301608851s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (114.30s)

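This profile moves the API server from the default 8443 to 8444 via --apiserver-port. The port actually in use can be read out of the kubeconfig entry minikube wrote (a sketch; minikube names the cluster entry after the profile):

    # The server URL recorded for this cluster should end in :8444
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-699715")].cluster.server}'; echo
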
TestStartStop/group/no-preload/serial/DeployApp (10.58s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-093896 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5bc422de-a62f-4579-bb57-ea53aef44b9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5bc422de-a62f-4579-bb57-ea53aef44b9c] Running
E0223 22:49:43.004711   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:49:46.425591   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.029068439s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-093896 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.58s)

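DeployApp is the same three-step smoke test in every group: apply testdata/busybox.yaml, wait for the pod, then exec a trivial command (ulimit -n) to prove exec works end to end. A condensed manual version (a sketch; the test polls the pod list itself rather than using kubectl wait):

    kubectl --context no-preload-093896 create -f testdata/busybox.yaml
    # The test budgets 8m for the pod to come up
    kubectl --context no-preload-093896 wait --for=condition=Ready pod busybox --timeout=8m
    # Print the open-file-descriptor limit inside the container
    kubectl --context no-preload-093896 exec busybox -- /bin/sh -c "ulimit -n"
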
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-093896 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-093896 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (13.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-093896 --alsologtostderr -v=3
E0223 22:49:52.513176   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:52.518465   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:52.528737   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:52.549046   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:52.589452   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:52.669950   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:52.830359   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:53.150785   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:53.791112   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:55.071610   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:49:57.631766   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-093896 --alsologtostderr -v=3: (13.127752423s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.13s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-856279 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [851d870a-31e0-4d17-8bae-f461a8b461c4] Pending
helpers_test.go:344: "busybox" [851d870a-31e0-4d17-8bae-f461a8b461c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [851d870a-31e0-4d17-8bae-f461a8b461c4] Running
E0223 22:50:02.752012   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:50:02.982432   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:02.987732   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:02.998045   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:03.018380   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:03.058716   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:03.139088   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:03.299618   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:03.620197   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:04.260705   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:05.541318   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:08.101795   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.024893485s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-856279 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093896 -n no-preload-093896
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093896 -n no-preload-093896: exit status 7 (80.733265ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-093896 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

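EnableAddonAfterStop leans on minikube status exit codes: against the freshly stopped profile the Host check exits with status 7, which the test explicitly treats as acceptable ("may be ok") before enabling the dashboard addon. A sketch of the same flow:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093896 -n no-preload-093896
    echo "status exit code: $?"   # 7 is expected here, since the cluster was just stopped
    # Addons can still be toggled while the profile is stopped
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-093896
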
TestStartStop/group/no-preload/serial/SecondStart (613.3s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-093896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-093896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1: (10m13.033399605s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093896 -n no-preload-093896
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (613.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-856279 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-856279 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-856279 --alsologtostderr -v=3
E0223 22:50:12.992979   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:50:13.222464   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-856279 --alsologtostderr -v=3: (13.144638622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-699715 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29c4acfc-a2aa-4d4a-a114-9103567d23b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29c4acfc-a2aa-4d4a-a114-9103567d23b0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.029687264s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-699715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.47s)

TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-752018 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fcdfc23c-7570-4163-811b-bfc69ea5f584] Pending
helpers_test.go:344: "busybox" [fcdfc23c-7570-4163-811b-bfc69ea5f584] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fcdfc23c-7570-4163-811b-bfc69ea5f584] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.036080266s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-752018 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-856279 -n old-k8s-version-856279
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-856279 -n old-k8s-version-856279: exit status 7 (65.520037ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-856279 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (459.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-856279 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0223 22:50:23.463630   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-856279 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m39.36334848s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-856279 -n old-k8s-version-856279
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (459.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-699715 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-699715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-699715 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-699715 --alsologtostderr -v=3: (13.14622463s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.15s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-752018 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-752018 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/Stop (13.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-752018 --alsologtostderr -v=3
E0223 22:50:26.830780   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:26.836059   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:26.847195   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:26.868135   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:26.909130   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:26.989634   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:27.149794   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:27.386080   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:50:27.470378   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:28.110891   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:29.391808   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:31.952719   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:50:33.473290   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:50:37.073508   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-752018 --alsologtostderr -v=3: (13.126764012s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715: exit status 7 (89.554248ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-699715 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-699715 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-699715 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1: (5m46.754891577s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752018 -n embed-certs-752018
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752018 -n embed-certs-752018: exit status 7 (65.438861ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-752018 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (335.87s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-752018 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1
E0223 22:50:43.943968   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:50:47.314205   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:51:04.925008   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:51:07.794827   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:51:14.434122   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:51:14.560506   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:51:24.904769   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:51:48.755361   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:51:48.831605   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:51:49.307156   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:51:53.225693   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.231007   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.241261   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.261560   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.301846   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.382543   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.542802   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:53.863956   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:54.505179   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:55.785629   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:51:58.346444   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:52:03.310101   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.315369   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.325631   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.345915   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.386228   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.466566   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.466635   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:52:03.626738   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:03.947418   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:04.588336   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:05.868810   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:08.429218   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:13.549498   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:13.708885   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:52:22.576982   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:22.582242   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:22.592436   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:22.613424   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:22.653708   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:22.734027   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:22.894332   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:23.214934   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:23.789653   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:23.855868   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:25.137086   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:27.697989   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:32.818268   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:34.190038   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:52:35.338829   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
E0223 22:52:36.354314   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:52:43.058496   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:52:44.270663   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:52:46.825553   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:52:50.439071   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:50.444320   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:50.454550   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:50.474780   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:50.515115   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:50.595395   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:50.755857   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:51.076212   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:51.716473   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:52.997340   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:52:55.557811   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:53:00.678428   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:53:03.539182   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:53:10.675968   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:53:10.919376   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:53:15.150456   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:53:21.082213   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:53:25.231605   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:53:31.400442   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:53:35.124253   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:53:44.499704   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:53:48.766233   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/auto-409320/client.crt: no such file or directory
E0223 22:53:55.972810   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
E0223 22:54:05.464589   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:54:12.361249   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:54:33.148383   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kindnet-409320/client.crt: no such file or directory
E0223 22:54:37.070611   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/enable-default-cni-409320/client.crt: no such file or directory
E0223 22:54:47.152490   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/flannel-409320/client.crt: no such file or directory
E0223 22:54:52.514231   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:54:58.170385   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/skaffold-383159/client.crt: no such file or directory
E0223 22:55:02.983127   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:55:06.420558   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:55:20.195509   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/calico-409320/client.crt: no such file or directory
E0223 22:55:26.830043   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:55:30.666202   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/custom-flannel-409320/client.crt: no such file or directory
E0223 22:55:34.282144   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
E0223 22:55:54.516831   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
E0223 22:56:14.560575   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
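
Note: the repeated cert_rotation.go:168 errors above appear to come from client-go's certificate-rotation watcher inside the long-running shared test process (the 66927 in each line); it keeps trying to reload client certificates for profiles such as flannel-409320, bridge-409320 and kubenet-409320 whose files were removed when those earlier clusters were deleted. They are stale-reference noise, not failures of the tests that follow.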
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-752018 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1: (5m35.574140356s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752018 -n embed-certs-752018
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.87s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-6cxcz" [3c79cfa1-ac50-4c21-aa7e-d139040f6c1e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019967813s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
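
The check above polls for pods carrying the k8s-app=kubernetes-dashboard label until they report healthy. A rough plain-kubectl equivalent of the same wait (a sketch only, assuming a context named "example" that is not part of this run):

	# wait for the dashboard pod to become Ready, with the test's 9-minute ceiling
	kubectl --context example -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m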

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-6cxcz" [3c79cfa1-ac50-4c21-aa7e-d139040f6c1e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015668855s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-752018 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wdpnm" [427510aa-101e-401c-99f7-c2d84536cdbc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016715561s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-752018 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
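
The image audit above is a single SSH'd crictl query. A minimal sketch of the same check, assuming a profile named "example" (not one from this run):

	# list images known to the container runtime as JSON; the test scans the image tags
	out/minikube-linux-amd64 ssh -p example "sudo crictl images -o json"

The busybox and gvisor-addon images flagged here are leftovers from earlier tests in the run; the test reports them as non-minikube images but still passes.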

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-752018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752018 -n embed-certs-752018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752018 -n embed-certs-752018: exit status 2 (251.921733ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-752018 -n embed-certs-752018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-752018 -n embed-certs-752018: exit status 2 (247.642074ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-752018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752018 -n embed-certs-752018
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-752018 -n embed-certs-752018
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)
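
The pause check runs a fixed sequence: pause, confirm component state via Go-template status queries, then unpause. A minimal sketch of the same steps, assuming a profile named "example" (not one from this run):

	out/minikube-linux-amd64 pause -p example
	out/minikube-linux-amd64 status --format={{.APIServer}} -p example   # prints "Paused"; exit status 2 is expected
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p example     # prints "Stopped"; exit status 2 again
	out/minikube-linux-amd64 unpause -p example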

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (76.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-813019 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-813019 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1: (1m16.555771518s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (76.56s)
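
Note: this profile starts with --network-plugin=cni but never installs an actual CNI, which is why the later newest-cni app tests are skipped with the "cni mode requires additional setup before pods can schedule" warning; the --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag only hands the pod CIDR to kubeadm for whatever plugin would be added later.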

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wdpnm" [427510aa-101e-401c-99f7-c2d84536cdbc] Running
E0223 22:56:31.882893   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008869484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-699715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-699715 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-699715 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715: exit status 2 (266.085175ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715: exit status 2 (264.920089ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-699715 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-699715 -n default-k8s-diff-port-699715
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-813019 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)
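
Note: the enable call overrides the MetricsServer image and registry (k8s.gcr.io/echoserver:1.4 served from fake.domain), which suggests the test is exercising the addon-enable plumbing rather than depending on a real, pullable metrics-server image.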

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-813019 --alsologtostderr -v=3
E0223 22:57:50.260937   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/bridge-409320/client.crt: no such file or directory
E0223 22:57:50.439703   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/kubenet-409320/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-813019 --alsologtostderr -v=3: (13.119549588s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-813019 -n newest-cni-813019
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-813019 -n newest-cni-813019: exit status 7 (65.960415ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-813019 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (47.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-813019 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-813019 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1: (47.026468449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-813019 -n newest-cni-813019
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.31s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9djsk" [5ebde763-95a3-4924-bd9b-55d13753f860] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017328339s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9djsk" [5ebde763-95a3-4924-bd9b-55d13753f860] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00885361s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-856279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-856279 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-856279 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-856279 -n old-k8s-version-856279
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-856279 -n old-k8s-version-856279: exit status 2 (256.710545ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-856279 -n old-k8s-version-856279
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-856279 -n old-k8s-version-856279: exit status 2 (251.466003ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-856279 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-856279 -n old-k8s-version-856279
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-856279 -n old-k8s-version-856279
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-813019 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-813019 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-813019 -n newest-cni-813019
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-813019 -n newest-cni-813019: exit status 2 (239.370067ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-813019 -n newest-cni-813019
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-813019 -n newest-cni-813019: exit status 2 (235.31927ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-813019 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-813019 -n newest-cni-813019
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-813019 -n newest-cni-813019
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8b25g" [574b3890-9bab-40c9-8bd4-321d07fb65b8] Running
E0223 23:00:16.347383   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/default-k8s-diff-port-699715/client.crt: no such file or directory
E0223 23:00:18.846530   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/old-k8s-version-856279/client.crt: no such file or directory
E0223 23:00:18.907871   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/default-k8s-diff-port-699715/client.crt: no such file or directory
E0223 23:00:19.017476   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/gvisor-703041/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016205034s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8b25g" [574b3890-9bab-40c9-8bd4-321d07fb65b8] Running
E0223 23:00:24.028837   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/default-k8s-diff-port-699715/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006935923s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-093896 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-093896 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.41s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-093896 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093896 -n no-preload-093896
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093896 -n no-preload-093896: exit status 2 (229.771679ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-093896 -n no-preload-093896
E0223 23:00:26.830187   66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/false-409320/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-093896 -n no-preload-093896: exit status 2 (231.892238ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-093896 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093896 -n no-preload-093896
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-093896 -n no-preload-093896
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.41s)

Test skip (29/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-409320 [pass: true] --------------------------------
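(The cilium profile was skipped before any cluster was created, so every probe in this dump fails with a missing-context or missing-profile error.)
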
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-409320

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/hosts:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/resolv.conf:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-409320

>>> host: crictl pods:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: crictl containers:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> k8s: describe netcat deployment:
error: context "cilium-409320" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-409320" does not exist

>>> k8s: netcat logs:
error: context "cilium-409320" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-409320" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-409320" does not exist

>>> k8s: coredns logs:
error: context "cilium-409320" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-409320" does not exist

>>> k8s: api server logs:
error: context "cilium-409320" does not exist

>>> host: /etc/cni:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: ip a s:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: ip r s:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: iptables-save:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: iptables table nat:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-409320

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-409320

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-409320" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-409320" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-409320

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-409320

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-409320" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-409320" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-409320" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-409320" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-409320" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: kubelet daemon config:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> k8s: kubelet logs:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-409320

>>> host: docker daemon status:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: docker daemon config:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: docker system info:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: cri-docker daemon status:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: cri-docker daemon config:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: cri-dockerd version:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: containerd daemon status:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: containerd daemon config:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: containerd config dump:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: crio daemon status:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: crio daemon config:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: /etc/crio:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

>>> host: crio config:
* Profile "cilium-409320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409320"

----------------------- debugLogs end: cilium-409320 [took: 3.931696614s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-409320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-409320
--- SKIP: TestNetworkPlugins/group/cilium (4.33s)

x
+
TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-325616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-325616
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)
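
Note: the skip at start_stop_delete_test.go:103 gates this group on the virtualbox driver, so this kvm2 run only cleans up the pre-created profile (0.42s). A sketch of how the group could be exercised, assuming minikube's usual integration-test entry point (the flag name below is an assumption, not taken from this log):

	# Hypothetical invocation: run just this group with clusters started
	# on the virtualbox driver.
	go test ./test/integration -run "TestStartStop/group/disable-driver-mounts" \
		-args --minikube-start-args="--driver=virtualbox"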