Test Report: KVM_Linux 17233

bcf36281e7a7fa8f06366be9b7645ae5ef101314:2023-09-12:30983

Failed tests (2/317)

Order  Failed test                              Duration (s)
214    TestMultiNode/serial/RestartKeepsNodes   117
215    TestMultiNode/serial/DeleteNode          3.08
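For local triage, the failing sequence can be replayed against the same profile. This is a hedged sketch, not part of the report: it assumes a minikube source checkout with the integration binary already built at out/minikube-linux-amd64, and it simply mirrors the commands the test driver ran (node list, stop, then start), as recorded in the log below.

```shell
# Replay of the RestartKeepsNodes sequence from this report (sketch only).
# Assumption: out/minikube-linux-amd64 exists, as in the CI environment.
PROFILE=multinode-348977
MINIKUBE=out/minikube-linux-amd64

if [ -x "$MINIKUBE" ]; then
  "$MINIKUBE" node list -p "$PROFILE"
  "$MINIKUBE" stop -p "$PROFILE"
  # This is the step that exited with status 90 in this run:
  "$MINIKUBE" start -p "$PROFILE" --wait=true -v=8 --alsologtostderr
else
  echo "minikube binary not found at $MINIKUBE; build it first" >&2
fi
```

The --alsologtostderr output of the final start step is where this run's failure detail appears (the "** stderr **" section below).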
TestMultiNode/serial/RestartKeepsNodes (117s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-348977
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-348977
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-348977: (27.768511913s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-348977 --wait=true -v=8 --alsologtostderr
E0912 18:42:14.122543   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:42:41.806546   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-348977 --wait=true -v=8 --alsologtostderr: exit status 90 (1m26.86288845s)

-- stdout --
	* [multinode-348977] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-348977 in cluster multinode-348977
	* Restarting existing kvm2 VM for "multinode-348977" ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-348977-m02 in cluster multinode-348977
	* Restarting existing kvm2 VM for "multinode-348977-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.209
	
	

-- /stdout --
** stderr ** 
	I0912 18:41:25.667613   25774 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:41:25.667734   25774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:41:25.667744   25774 out.go:309] Setting ErrFile to fd 2...
	I0912 18:41:25.667751   25774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:41:25.667992   25774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:41:25.668537   25774 out.go:303] Setting JSON to false
	I0912 18:41:25.675371   25774 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1436,"bootTime":1694542650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:41:25.675445   25774 start.go:138] virtualization: kvm guest
	I0912 18:41:25.677679   25774 out.go:177] * [multinode-348977] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:41:25.679064   25774 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:41:25.679068   25774 notify.go:220] Checking for updates...
	I0912 18:41:25.680532   25774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:41:25.681821   25774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:25.683123   25774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:41:25.684315   25774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:41:25.685748   25774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:41:25.687862   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:41:25.687948   25774 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:41:25.688376   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:41:25.688437   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:41:25.702321   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0912 18:41:25.702721   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:41:25.703222   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:41:25.703249   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:41:25.703593   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:41:25.703770   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:25.738031   25774 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 18:41:25.739353   25774 start.go:298] selected driver: kvm2
	I0912 18:41:25.739367   25774 start.go:902] validating driver "kvm2" against &{Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:41:25.739535   25774 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:41:25.739863   25774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:41:25.739952   25774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3674/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:41:25.754342   25774 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:41:25.755022   25774 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 18:41:25.755067   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:41:25.755081   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:41:25.755090   25774 start_flags.go:321] config:
	{Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0912 18:41:25.755284   25774 iso.go:125] acquiring lock: {Name:mk43b7bcf1553c61ec6315fe7159639653246bdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:41:25.757119   25774 out.go:177] * Starting control plane node multinode-348977 in cluster multinode-348977
	I0912 18:41:25.758385   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:41:25.758412   25774 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0912 18:41:25.758419   25774 cache.go:57] Caching tarball of preloaded images
	I0912 18:41:25.758521   25774 preload.go:174] Found /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:41:25.758535   25774 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:41:25.758693   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:41:25.758878   25774 start.go:365] acquiring machines lock for multinode-348977: {Name:mkb814e9f5e9709f943ea910e0cc7d91215dc74f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:41:25.758921   25774 start.go:369] acquired machines lock for "multinode-348977" in 23.43µs
	I0912 18:41:25.758937   25774 start.go:96] Skipping create...Using existing machine configuration
	I0912 18:41:25.758946   25774 fix.go:54] fixHost starting: 
	I0912 18:41:25.759194   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:41:25.759230   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:41:25.772820   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0912 18:41:25.773260   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:41:25.773721   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:41:25.773744   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:41:25.774050   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:41:25.774207   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:25.774351   25774 main.go:141] libmachine: (multinode-348977) Calling .GetState
	I0912 18:41:25.776006   25774 fix.go:102] recreateIfNeeded on multinode-348977: state=Stopped err=<nil>
	I0912 18:41:25.776027   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	W0912 18:41:25.776184   25774 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 18:41:25.778169   25774 out.go:177] * Restarting existing kvm2 VM for "multinode-348977" ...
	I0912 18:41:25.779423   25774 main.go:141] libmachine: (multinode-348977) Calling .Start
	I0912 18:41:25.779594   25774 main.go:141] libmachine: (multinode-348977) Ensuring networks are active...
	I0912 18:41:25.780345   25774 main.go:141] libmachine: (multinode-348977) Ensuring network default is active
	I0912 18:41:25.780685   25774 main.go:141] libmachine: (multinode-348977) Ensuring network mk-multinode-348977 is active
	I0912 18:41:25.780989   25774 main.go:141] libmachine: (multinode-348977) Getting domain xml...
	I0912 18:41:25.781706   25774 main.go:141] libmachine: (multinode-348977) Creating domain...
	I0912 18:41:26.979765   25774 main.go:141] libmachine: (multinode-348977) Waiting to get IP...
	I0912 18:41:26.980558   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:26.980870   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:26.980946   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:26.980855   25804 retry.go:31] will retry after 279.689815ms: waiting for machine to come up
	I0912 18:41:27.262432   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.262870   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.262898   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.262825   25804 retry.go:31] will retry after 258.456262ms: waiting for machine to come up
	I0912 18:41:27.523376   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.523770   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.523792   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.523714   25804 retry.go:31] will retry after 470.938004ms: waiting for machine to come up
	I0912 18:41:27.996320   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.996767   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.996795   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.996720   25804 retry.go:31] will retry after 597.246886ms: waiting for machine to come up
	I0912 18:41:28.595108   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:28.595555   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:28.595588   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:28.595492   25804 retry.go:31] will retry after 568.569691ms: waiting for machine to come up
	I0912 18:41:29.165136   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:29.165526   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:29.165568   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:29.165431   25804 retry.go:31] will retry after 758.578505ms: waiting for machine to come up
	I0912 18:41:29.925242   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:29.925603   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:29.925635   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:29.925546   25804 retry.go:31] will retry after 859.704183ms: waiting for machine to come up
	I0912 18:41:30.786642   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:30.786967   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:30.787004   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:30.786922   25804 retry.go:31] will retry after 1.183485789s: waiting for machine to come up
	I0912 18:41:31.972095   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:31.972538   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:31.972559   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:31.972512   25804 retry.go:31] will retry after 1.429607271s: waiting for machine to come up
	I0912 18:41:33.403618   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:33.403985   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:33.404016   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:33.403933   25804 retry.go:31] will retry after 1.93373353s: waiting for machine to come up
	I0912 18:41:35.340062   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:35.340437   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:35.340468   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:35.340385   25804 retry.go:31] will retry after 2.736938727s: waiting for machine to come up
	I0912 18:41:38.080033   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:38.080374   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:38.080419   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:38.080335   25804 retry.go:31] will retry after 3.047877472s: waiting for machine to come up
	I0912 18:41:41.129305   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:41.129731   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:41.129764   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:41.129706   25804 retry.go:31] will retry after 4.362757487s: waiting for machine to come up
	I0912 18:41:45.497217   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.497673   25774 main.go:141] libmachine: (multinode-348977) Found IP for machine: 192.168.39.209
	I0912 18:41:45.497700   25774 main.go:141] libmachine: (multinode-348977) Reserving static IP address...
	I0912 18:41:45.497715   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has current primary IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.498115   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "multinode-348977", mac: "52:54:00:38:2d:65", ip: "192.168.39.209"} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.498148   25774 main.go:141] libmachine: (multinode-348977) DBG | skip adding static IP to network mk-multinode-348977 - found existing host DHCP lease matching {name: "multinode-348977", mac: "52:54:00:38:2d:65", ip: "192.168.39.209"}
	I0912 18:41:45.498157   25774 main.go:141] libmachine: (multinode-348977) Reserved static IP address: 192.168.39.209
	I0912 18:41:45.498169   25774 main.go:141] libmachine: (multinode-348977) Waiting for SSH to be available...
	I0912 18:41:45.498196   25774 main.go:141] libmachine: (multinode-348977) DBG | Getting to WaitForSSH function...
	I0912 18:41:45.500347   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.500695   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.500725   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.500838   25774 main.go:141] libmachine: (multinode-348977) DBG | Using SSH client type: external
	I0912 18:41:45.500863   25774 main.go:141] libmachine: (multinode-348977) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa (-rw-------)
	I0912 18:41:45.500903   25774 main.go:141] libmachine: (multinode-348977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:41:45.500923   25774 main.go:141] libmachine: (multinode-348977) DBG | About to run SSH command:
	I0912 18:41:45.500939   25774 main.go:141] libmachine: (multinode-348977) DBG | exit 0
	I0912 18:41:45.586419   25774 main.go:141] libmachine: (multinode-348977) DBG | SSH cmd err, output: <nil>: 
	I0912 18:41:45.586834   25774 main.go:141] libmachine: (multinode-348977) Calling .GetConfigRaw
	I0912 18:41:45.587556   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:45.589868   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.590371   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.590417   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.590668   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:41:45.590853   25774 machine.go:88] provisioning docker machine ...
	I0912 18:41:45.590870   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:45.591068   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.591255   25774 buildroot.go:166] provisioning hostname "multinode-348977"
	I0912 18:41:45.591275   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.591470   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.593702   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.594074   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.594103   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.594218   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.594383   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.594509   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.594632   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.594781   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:45.595274   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:45.595295   25774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348977 && echo "multinode-348977" | sudo tee /etc/hostname
	I0912 18:41:45.722017   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348977
	
	I0912 18:41:45.722046   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.724726   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.725094   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.725130   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.725251   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.725458   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.725619   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.725761   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.725916   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:45.726274   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:45.726292   25774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348977/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:41:45.842816   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:41:45.842841   25774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3674/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3674/.minikube}
	I0912 18:41:45.842857   25774 buildroot.go:174] setting up certificates
	I0912 18:41:45.842865   25774 provision.go:83] configureAuth start
	I0912 18:41:45.842874   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.843162   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:45.845880   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.846268   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.846304   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.846423   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.848394   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.848724   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.848757   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.848868   25774 provision.go:138] copyHostCerts
	I0912 18:41:45.848897   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:41:45.848925   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem, removing ...
	I0912 18:41:45.848930   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:41:45.848994   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem (1078 bytes)
	I0912 18:41:45.849111   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:41:45.849132   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem, removing ...
	I0912 18:41:45.849136   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:41:45.849173   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem (1123 bytes)
	I0912 18:41:45.849235   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:41:45.849258   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem, removing ...
	I0912 18:41:45.849267   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:41:45.849293   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem (1675 bytes)
	I0912 18:41:45.849363   25774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem org=jenkins.multinode-348977 san=[192.168.39.209 192.168.39.209 localhost 127.0.0.1 minikube multinode-348977]
	I0912 18:41:45.937349   25774 provision.go:172] copyRemoteCerts
	I0912 18:41:45.937398   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:41:45.937443   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.940144   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.940452   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.940478   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.940646   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.940826   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.941012   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.941161   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:46.028317   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 18:41:46.028387   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 18:41:46.051259   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 18:41:46.051345   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 18:41:46.073514   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 18:41:46.073587   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 18:41:46.094769   25774 provision.go:86] duration metric: configureAuth took 251.89397ms
	I0912 18:41:46.094791   25774 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:41:46.095009   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:41:46.095035   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:46.095303   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.097707   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.098061   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.098087   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.098202   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.098375   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.098520   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.098678   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.098851   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.099151   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.099166   25774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 18:41:46.212162   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 18:41:46.212182   25774 buildroot.go:70] root file system type: tmpfs
	I0912 18:41:46.212298   25774 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 18:41:46.212318   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.214891   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.215233   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.215263   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.215455   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.215642   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.215791   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.215920   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.216075   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.216522   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.216627   25774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 18:41:46.339328   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 18:41:46.339371   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.341974   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.342333   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.342373   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.342551   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.342746   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.342899   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.343025   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.343217   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.343656   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.343688   25774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 18:41:47.205623   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 18:41:47.205652   25774 machine.go:91] provisioned docker machine in 1.61478511s
	I0912 18:41:47.205663   25774 start.go:300] post-start starting for "multinode-348977" (driver="kvm2")
	I0912 18:41:47.205676   25774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:41:47.205694   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.205995   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:41:47.206022   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.208743   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.209079   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.209103   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.209248   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.209422   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.209594   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.209743   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.295876   25774 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:41:47.299689   25774 command_runner.go:130] > NAME=Buildroot
	I0912 18:41:47.299703   25774 command_runner.go:130] > VERSION=2021.02.12-1-gaa74cea-dirty
	I0912 18:41:47.299708   25774 command_runner.go:130] > ID=buildroot
	I0912 18:41:47.299713   25774 command_runner.go:130] > VERSION_ID=2021.02.12
	I0912 18:41:47.299717   25774 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0912 18:41:47.299906   25774 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:41:47.299927   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/addons for local assets ...
	I0912 18:41:47.299995   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/files for local assets ...
	I0912 18:41:47.300083   25774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> 108482.pem in /etc/ssl/certs
	I0912 18:41:47.300095   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /etc/ssl/certs/108482.pem
	I0912 18:41:47.300182   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 18:41:47.307891   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:41:47.330135   25774 start.go:303] post-start completed in 124.459565ms
	I0912 18:41:47.330151   25774 fix.go:56] fixHost completed within 21.57120518s
	I0912 18:41:47.330168   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.332212   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.332586   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.332620   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.332750   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.332956   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.333101   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.333254   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.333426   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:47.333724   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:47.333735   25774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 18:41:47.443376   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694544107.413556881
	
	I0912 18:41:47.443399   25774 fix.go:206] guest clock: 1694544107.413556881
	I0912 18:41:47.443409   25774 fix.go:219] Guest: 2023-09-12 18:41:47.413556881 +0000 UTC Remote: 2023-09-12 18:41:47.330154345 +0000 UTC m=+21.694086344 (delta=83.402536ms)
	I0912 18:41:47.443449   25774 fix.go:190] guest clock delta is within tolerance: 83.402536ms
	I0912 18:41:47.443457   25774 start.go:83] releasing machines lock for "multinode-348977", held for 21.684524313s
	I0912 18:41:47.443482   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.443730   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:47.446097   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.446567   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.446617   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.446750   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447397   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447575   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447653   25774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:41:47.447692   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.447780   25774 ssh_runner.go:195] Run: cat /version.json
	I0912 18:41:47.447796   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.450306   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450547   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450692   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.450723   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450860   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.451014   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.451041   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.451051   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.451131   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.451226   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.451300   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.451366   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.451417   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.451543   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.556478   25774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 18:41:47.556535   25774 command_runner.go:130] > {"iso_version": "v1.31.0-1694081706-17207", "kicbase_version": "v0.0.40-1693218425-17145", "minikube_version": "v1.31.2", "commit": "1e9174da326b681d7488cd5fad4145a637e5f218"}
	I0912 18:41:47.556664   25774 ssh_runner.go:195] Run: systemctl --version
	I0912 18:41:47.562789   25774 command_runner.go:130] > systemd 247 (247)
	I0912 18:41:47.562819   25774 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0912 18:41:47.563174   25774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 18:41:47.568635   25774 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 18:41:47.568673   25774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:41:47.568744   25774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:41:47.583069   25774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0912 18:41:47.583097   25774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:41:47.583106   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:41:47.583215   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:41:47.600326   25774 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0912 18:41:47.600503   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:41:47.610473   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:41:47.620204   25774 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:41:47.620270   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:41:47.629827   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:41:47.639220   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:41:47.648528   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:41:47.657904   25774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:41:47.668248   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:41:47.678277   25774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:41:47.686527   25774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 18:41:47.686608   25774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:41:47.695327   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:47.797158   25774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:41:47.812584   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:41:47.812657   25774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 18:41:47.826911   25774 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0912 18:41:47.826933   25774 command_runner.go:130] > [Unit]
	I0912 18:41:47.826944   25774 command_runner.go:130] > Description=Docker Application Container Engine
	I0912 18:41:47.826952   25774 command_runner.go:130] > Documentation=https://docs.docker.com
	I0912 18:41:47.826960   25774 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0912 18:41:47.826969   25774 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0912 18:41:47.826978   25774 command_runner.go:130] > StartLimitBurst=3
	I0912 18:41:47.826986   25774 command_runner.go:130] > StartLimitIntervalSec=60
	I0912 18:41:47.826998   25774 command_runner.go:130] > [Service]
	I0912 18:41:47.827006   25774 command_runner.go:130] > Type=notify
	I0912 18:41:47.827015   25774 command_runner.go:130] > Restart=on-failure
	I0912 18:41:47.827032   25774 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0912 18:41:47.827055   25774 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0912 18:41:47.827069   25774 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0912 18:41:47.827082   25774 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0912 18:41:47.827095   25774 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0912 18:41:47.827109   25774 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0912 18:41:47.827127   25774 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0912 18:41:47.827144   25774 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0912 18:41:47.827160   25774 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0912 18:41:47.827169   25774 command_runner.go:130] > ExecStart=
	I0912 18:41:47.827195   25774 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0912 18:41:47.827210   25774 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0912 18:41:47.827222   25774 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0912 18:41:47.827234   25774 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0912 18:41:47.827246   25774 command_runner.go:130] > LimitNOFILE=infinity
	I0912 18:41:47.827266   25774 command_runner.go:130] > LimitNPROC=infinity
	I0912 18:41:47.827278   25774 command_runner.go:130] > LimitCORE=infinity
	I0912 18:41:47.827289   25774 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0912 18:41:47.827299   25774 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0912 18:41:47.827311   25774 command_runner.go:130] > TasksMax=infinity
	I0912 18:41:47.827322   25774 command_runner.go:130] > TimeoutStartSec=0
	I0912 18:41:47.827336   25774 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0912 18:41:47.827346   25774 command_runner.go:130] > Delegate=yes
	I0912 18:41:47.827356   25774 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0912 18:41:47.827363   25774 command_runner.go:130] > KillMode=process
	I0912 18:41:47.827369   25774 command_runner.go:130] > [Install]
	I0912 18:41:47.827385   25774 command_runner.go:130] > WantedBy=multi-user.target
	I0912 18:41:47.827455   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:41:47.849101   25774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 18:41:47.865230   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:41:47.877782   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:41:47.890445   25774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:41:47.918932   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:41:47.930773   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:41:47.947116   25774 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0912 18:41:47.947200   25774 ssh_runner.go:195] Run: which cri-dockerd
	I0912 18:41:47.950521   25774 command_runner.go:130] > /usr/bin/cri-dockerd
	I0912 18:41:47.950648   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 18:41:47.958320   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 18:41:47.973919   25774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 18:41:48.073799   25774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 18:41:48.184968   25774 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 18:41:48.185002   25774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 18:41:48.201823   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:48.299993   25774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 18:41:49.744586   25774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.444560977s)
	I0912 18:41:49.744655   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:41:49.846098   25774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 18:41:49.958418   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:41:50.061865   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:50.173855   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 18:41:50.189825   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:50.290635   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0912 18:41:50.371946   25774 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 18:41:50.372017   25774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 18:41:50.377756   25774 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0912 18:41:50.377774   25774 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 18:41:50.377781   25774 command_runner.go:130] > Device: 16h/22d	Inode: 849         Links: 1
	I0912 18:41:50.377800   25774 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0912 18:41:50.377809   25774 command_runner.go:130] > Access: 2023-09-12 18:41:50.278241513 +0000
	I0912 18:41:50.377817   25774 command_runner.go:130] > Modify: 2023-09-12 18:41:50.278241513 +0000
	I0912 18:41:50.377825   25774 command_runner.go:130] > Change: 2023-09-12 18:41:50.281246019 +0000
	I0912 18:41:50.377831   25774 command_runner.go:130] >  Birth: -
	I0912 18:41:50.377939   25774 start.go:537] Will wait 60s for crictl version
	I0912 18:41:50.377991   25774 ssh_runner.go:195] Run: which crictl
	I0912 18:41:50.381510   25774 command_runner.go:130] > /usr/bin/crictl
	I0912 18:41:50.381786   25774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 18:41:50.424429   25774 command_runner.go:130] > Version:  0.1.0
	I0912 18:41:50.424452   25774 command_runner.go:130] > RuntimeName:  docker
	I0912 18:41:50.424460   25774 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0912 18:41:50.424466   25774 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0912 18:41:50.424724   25774 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0912 18:41:50.424789   25774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 18:41:50.450690   25774 command_runner.go:130] > 24.0.6
	I0912 18:41:50.450956   25774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 18:41:50.476349   25774 command_runner.go:130] > 24.0.6
	I0912 18:41:50.479420   25774 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0912 18:41:50.479460   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:50.482063   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:50.482385   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:50.482420   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:50.482563   25774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 18:41:50.486347   25774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:41:50.497696   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:41:50.497741   25774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 18:41:50.517010   25774 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0912 18:41:50.517029   25774 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 18:41:50.517037   25774 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0912 18:41:50.517049   25774 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0912 18:41:50.517055   25774 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0912 18:41:50.517062   25774 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0912 18:41:50.517072   25774 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0912 18:41:50.517083   25774 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0912 18:41:50.517094   25774 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:41:50.517105   25774 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0912 18:41:50.517185   25774 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0912 18:41:50.517206   25774 docker.go:566] Images already preloaded, skipping extraction
	I0912 18:41:50.517258   25774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 18:41:50.536618   25774 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0912 18:41:50.536638   25774 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0912 18:41:50.536646   25774 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0912 18:41:50.536655   25774 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 18:41:50.536663   25774 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0912 18:41:50.536670   25774 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0912 18:41:50.536682   25774 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0912 18:41:50.536688   25774 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0912 18:41:50.536697   25774 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:41:50.536704   25774 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0912 18:41:50.536730   25774 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0912 18:41:50.536746   25774 cache_images.go:84] Images are preloaded, skipping loading
	I0912 18:41:50.536845   25774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 18:41:50.562638   25774 command_runner.go:130] > cgroupfs
	I0912 18:41:50.562909   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:41:50.562928   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:41:50.562949   25774 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 18:41:50.562976   25774 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-348977 NodeName:multinode-348977 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 18:41:50.563124   25774 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-348977"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 18:41:50.563209   25774 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-348977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 18:41:50.563267   25774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 18:41:50.572176   25774 command_runner.go:130] > kubeadm
	I0912 18:41:50.572195   25774 command_runner.go:130] > kubectl
	I0912 18:41:50.572202   25774 command_runner.go:130] > kubelet
	I0912 18:41:50.572227   25774 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 18:41:50.572284   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 18:41:50.580040   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0912 18:41:50.596061   25774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 18:41:50.611346   25774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0912 18:41:50.627820   25774 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0912 18:41:50.631401   25774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:41:50.643788   25774 certs.go:56] Setting up /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977 for IP: 192.168.39.209
	I0912 18:41:50.643818   25774 certs.go:190] acquiring lock for shared ca certs: {Name:mk2421757d3f1bd81d42ecb091844bc5771a96da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:41:50.643980   25774 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key
	I0912 18:41:50.644020   25774 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key
	I0912 18:41:50.644084   25774 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key
	I0912 18:41:50.644164   25774 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key.c475731a
	I0912 18:41:50.644203   25774 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key
	I0912 18:41:50.644214   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 18:41:50.644226   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 18:41:50.644237   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 18:41:50.644251   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 18:41:50.644263   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 18:41:50.644276   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 18:41:50.644288   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 18:41:50.644299   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 18:41:50.644353   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem (1338 bytes)
	W0912 18:41:50.644381   25774 certs.go:433] ignoring /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848_empty.pem, impossibly tiny 0 bytes
	I0912 18:41:50.644391   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 18:41:50.644411   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem (1078 bytes)
	I0912 18:41:50.644433   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem (1123 bytes)
	I0912 18:41:50.644454   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem (1675 bytes)
	I0912 18:41:50.644488   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:41:50.644515   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.644528   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem -> /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.644540   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.645043   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 18:41:50.669387   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 18:41:50.693124   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 18:41:50.717344   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 18:41:50.741383   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 18:41:50.765260   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 18:41:50.788458   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 18:41:50.812054   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 18:41:50.834458   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 18:41:50.856384   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem --> /usr/share/ca-certificates/10848.pem (1338 bytes)
	I0912 18:41:50.879015   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /usr/share/ca-certificates/108482.pem (1708 bytes)
	I0912 18:41:50.901340   25774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 18:41:50.916988   25774 ssh_runner.go:195] Run: openssl version
	I0912 18:41:50.922215   25774 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0912 18:41:50.922272   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 18:41:50.931174   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935308   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935555   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935591   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.940768   25774 command_runner.go:130] > b5213941
	I0912 18:41:50.940828   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 18:41:50.949933   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10848.pem && ln -fs /usr/share/ca-certificates/10848.pem /etc/ssl/certs/10848.pem"
	I0912 18:41:50.959171   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963298   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 18:25 /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963387   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 18:25 /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963424   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.968525   25774 command_runner.go:130] > 51391683
	I0912 18:41:50.968577   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10848.pem /etc/ssl/certs/51391683.0"
	I0912 18:41:50.977420   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108482.pem && ln -fs /usr/share/ca-certificates/108482.pem /etc/ssl/certs/108482.pem"
	I0912 18:41:50.986499   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990623   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 18:25 /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990805   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 18:25 /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990843   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.996022   25774 command_runner.go:130] > 3ec20f2e
	I0912 18:41:50.996068   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/108482.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 18:41:51.005086   25774 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 18:41:51.009100   25774 command_runner.go:130] > ca.crt
	I0912 18:41:51.009117   25774 command_runner.go:130] > ca.key
	I0912 18:41:51.009124   25774 command_runner.go:130] > healthcheck-client.crt
	I0912 18:41:51.009131   25774 command_runner.go:130] > healthcheck-client.key
	I0912 18:41:51.009139   25774 command_runner.go:130] > peer.crt
	I0912 18:41:51.009144   25774 command_runner.go:130] > peer.key
	I0912 18:41:51.009151   25774 command_runner.go:130] > server.crt
	I0912 18:41:51.009160   25774 command_runner.go:130] > server.key
	I0912 18:41:51.009343   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 18:41:51.014870   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.014914   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 18:41:51.020073   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.020297   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 18:41:51.025479   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.025531   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 18:41:51.030928   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.030980   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 18:41:51.036454   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.036501   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 18:41:51.041830   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.041883   25774 kubeadm.go:404] StartCluster: {Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:41:51.041992   25774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 18:41:51.060635   25774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 18:41:51.069462   25774 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0912 18:41:51.069486   25774 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0912 18:41:51.069493   25774 command_runner.go:130] > /var/lib/minikube/etcd:
	I0912 18:41:51.069497   25774 command_runner.go:130] > member
	I0912 18:41:51.069726   25774 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0912 18:41:51.069759   25774 kubeadm.go:636] restartCluster start
	I0912 18:41:51.069810   25774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 18:41:51.077935   25774 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.078333   25774 kubeconfig.go:135] verify returned: extract IP: "multinode-348977" does not appear in /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:51.078445   25774 kubeconfig.go:146] "multinode-348977" context is missing from /home/jenkins/minikube-integration/17233-3674/kubeconfig - will repair!
	I0912 18:41:51.078733   25774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3674/kubeconfig: {Name:mked094375583bdbe55c31d17add6f22f93c8430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:41:51.079119   25774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:51.079379   25774 kapi.go:59] client config for multinode-348977: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key", CAFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 18:41:51.079954   25774 cert_rotation.go:137] Starting client certificate rotation controller
	I0912 18:41:51.080075   25774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 18:41:51.088002   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.088038   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.098117   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.098131   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.098157   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.109116   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.609873   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.610041   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.622441   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:52.110086   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:52.110176   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:52.121246   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:52.609915   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:52.609995   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:52.621769   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:53.109288   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:53.109378   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:53.120512   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:53.610143   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:53.610216   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:53.621426   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:54.110052   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:54.110138   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:54.121104   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:54.609667   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:54.609758   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:54.621600   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:55.110219   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:55.110305   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:55.121464   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:55.610086   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:55.610163   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:55.623327   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:56.109204   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:56.109279   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:56.120664   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:56.609222   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:56.609302   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:56.620802   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:57.109386   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:57.109488   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:57.120886   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:57.609416   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:57.609490   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:57.621348   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:58.109961   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:58.110033   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:58.121759   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:58.609284   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:58.609358   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:58.620494   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:59.110173   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:59.110270   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:59.121513   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:59.610149   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:59.610234   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:59.621495   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:00.110117   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:42:00.110197   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:42:00.121328   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:00.609937   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:42:00.610014   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:42:00.621204   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:01.088910   25774 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0912 18:42:01.088951   25774 kubeadm.go:1128] stopping kube-system containers ...
	I0912 18:42:01.089019   25774 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 18:42:01.110509   25774 command_runner.go:130] > 43aaf5c3bf6e
	I0912 18:42:01.110528   25774 command_runner.go:130] > 012e61091353
	I0912 18:42:01.110532   25774 command_runner.go:130] > 96a48d1e6808
	I0912 18:42:01.110536   25774 command_runner.go:130] > d9fcb5b50176
	I0912 18:42:01.110541   25774 command_runner.go:130] > 5486463296b7
	I0912 18:42:01.110545   25774 command_runner.go:130] > 7791e737cea3
	I0912 18:42:01.110548   25774 command_runner.go:130] > 1e31cfd643be
	I0912 18:42:01.110552   25774 command_runner.go:130] > 061d1cef513d
	I0912 18:42:01.110557   25774 command_runner.go:130] > 5253cfd31af0
	I0912 18:42:01.110564   25774 command_runner.go:130] > ff41c9b085ad
	I0912 18:42:01.110569   25774 command_runner.go:130] > c0587efa38db
	I0912 18:42:01.110575   25774 command_runner.go:130] > 3627cce96a10
	I0912 18:42:01.110581   25774 command_runner.go:130] > 14cac5d320ea
	I0912 18:42:01.110587   25774 command_runner.go:130] > a0de152dc98d
	I0912 18:42:01.110602   25774 command_runner.go:130] > 7fabc68ca233
	I0912 18:42:01.110615   25774 command_runner.go:130] > e113d197f01f
	I0912 18:42:01.110643   25774 docker.go:462] Stopping containers: [43aaf5c3bf6e 012e61091353 96a48d1e6808 d9fcb5b50176 5486463296b7 7791e737cea3 1e31cfd643be 061d1cef513d 5253cfd31af0 ff41c9b085ad c0587efa38db 3627cce96a10 14cac5d320ea a0de152dc98d 7fabc68ca233 e113d197f01f]
	I0912 18:42:01.110731   25774 ssh_runner.go:195] Run: docker stop 43aaf5c3bf6e 012e61091353 96a48d1e6808 d9fcb5b50176 5486463296b7 7791e737cea3 1e31cfd643be 061d1cef513d 5253cfd31af0 ff41c9b085ad c0587efa38db 3627cce96a10 14cac5d320ea a0de152dc98d 7fabc68ca233 e113d197f01f
	I0912 18:42:01.135827   25774 command_runner.go:130] > 43aaf5c3bf6e
	I0912 18:42:01.135850   25774 command_runner.go:130] > 012e61091353
	I0912 18:42:01.135857   25774 command_runner.go:130] > 96a48d1e6808
	I0912 18:42:01.135864   25774 command_runner.go:130] > d9fcb5b50176
	I0912 18:42:01.135871   25774 command_runner.go:130] > 5486463296b7
	I0912 18:42:01.135876   25774 command_runner.go:130] > 7791e737cea3
	I0912 18:42:01.135880   25774 command_runner.go:130] > 1e31cfd643be
	I0912 18:42:01.135883   25774 command_runner.go:130] > 061d1cef513d
	I0912 18:42:01.135898   25774 command_runner.go:130] > 5253cfd31af0
	I0912 18:42:01.135905   25774 command_runner.go:130] > ff41c9b085ad
	I0912 18:42:01.135914   25774 command_runner.go:130] > c0587efa38db
	I0912 18:42:01.135928   25774 command_runner.go:130] > 3627cce96a10
	I0912 18:42:01.135941   25774 command_runner.go:130] > 14cac5d320ea
	I0912 18:42:01.135948   25774 command_runner.go:130] > a0de152dc98d
	I0912 18:42:01.135954   25774 command_runner.go:130] > 7fabc68ca233
	I0912 18:42:01.135961   25774 command_runner.go:130] > e113d197f01f
	I0912 18:42:01.137164   25774 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 18:42:01.153212   25774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 18:42:01.162065   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0912 18:42:01.162089   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0912 18:42:01.162097   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0912 18:42:01.162104   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:42:01.162134   25774 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:42:01.162182   25774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 18:42:01.171349   25774 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0912 18:42:01.171377   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:01.289145   25774 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 18:42:01.289913   25774 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0912 18:42:01.290490   25774 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0912 18:42:01.291144   25774 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 18:42:01.292064   25774 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0912 18:42:01.292637   25774 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0912 18:42:01.293503   25774 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0912 18:42:01.294172   25774 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0912 18:42:01.294833   25774 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0912 18:42:01.295368   25774 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 18:42:01.296031   25774 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 18:42:01.297818   25774 command_runner.go:130] > [certs] Using the existing "sa" key
	I0912 18:42:01.298323   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.545462   25774 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 18:42:02.545494   25774 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 18:42:02.545504   25774 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 18:42:02.545512   25774 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 18:42:02.545520   25774 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 18:42:02.545606   25774 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2472468s)
	I0912 18:42:02.545640   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.733506   25774 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 18:42:02.733549   25774 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 18:42:02.733559   25774 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0912 18:42:02.733582   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.804619   25774 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 18:42:02.804641   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 18:42:02.809567   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 18:42:02.811026   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 18:42:02.816366   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.884578   25774 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 18:42:02.888100   25774 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:42:02.888191   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:02.904156   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:03.418205   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:03.918314   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:04.418376   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:04.917697   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:05.011521   25774 command_runner.go:130] > 1613
	I0912 18:42:05.012117   25774 api_server.go:72] duration metric: took 2.124014474s to wait for apiserver process to appear ...
	I0912 18:42:05.012146   25774 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:42:05.012167   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:05.012754   25774 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I0912 18:42:05.012783   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:05.013807   25774 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I0912 18:42:05.514175   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.050114   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 18:42:08.050147   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 18:42:08.050161   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.082430   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 18:42:08.082459   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 18:42:08.513959   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.519156   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 18:42:08.519185   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 18:42:09.014802   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:09.019756   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 18:42:09.019792   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 18:42:09.514302   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:09.520638   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:09.520736   25774 round_trippers.go:463] GET https://192.168.39.209:8443/version
	I0912 18:42:09.520751   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:09.520764   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:09.520778   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:09.528622   25774 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 18:42:09.528643   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:09.528652   25774 round_trippers.go:580]     Audit-Id: d7171996-093f-43cf-b1f6-28902f5d151b
	I0912 18:42:09.528659   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:09.528666   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:09.528674   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:09.528683   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:09.528692   25774 round_trippers.go:580]     Content-Length: 263
	I0912 18:42:09.528702   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:09 GMT
	I0912 18:42:09.528733   25774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 18:42:09.528825   25774 api_server.go:141] control plane version: v1.28.1
	I0912 18:42:09.528843   25774 api_server.go:131] duration metric: took 4.516689082s to wait for apiserver health ...
	I0912 18:42:09.528854   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:42:09.528863   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:42:09.530468   25774 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 18:42:09.531871   25774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 18:42:09.537270   25774 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0912 18:42:09.537291   25774 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0912 18:42:09.537299   25774 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0912 18:42:09.537309   25774 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 18:42:09.537318   25774 command_runner.go:130] > Access: 2023-09-12 18:41:39.002927530 +0000
	I0912 18:42:09.537330   25774 command_runner.go:130] > Modify: 2023-09-07 15:52:17.000000000 +0000
	I0912 18:42:09.537339   25774 command_runner.go:130] > Change: 2023-09-12 18:41:36.512921513 +0000
	I0912 18:42:09.537346   25774 command_runner.go:130] >  Birth: -
	I0912 18:42:09.537468   25774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 18:42:09.537490   25774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 18:42:09.570387   25774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 18:42:11.089548   25774 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0912 18:42:11.089572   25774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0912 18:42:11.089578   25774 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0912 18:42:11.089583   25774 command_runner.go:130] > daemonset.apps/kindnet configured
	I0912 18:42:11.089601   25774 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.519192244s)
	I0912 18:42:11.089624   25774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:42:11.089698   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:11.089707   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.089714   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.089720   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.094477   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.094497   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.094506   25774 round_trippers.go:580]     Audit-Id: 28e1b4f7-27a4-4728-9259-012beb5aa7e7
	I0912 18:42:11.094513   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.094519   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.094529   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.094536   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.094545   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.097562   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84576 chars]
	I0912 18:42:11.101281   25774 system_pods.go:59] 12 kube-system pods found
	I0912 18:42:11.101307   25774 system_pods.go:61] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 18:42:11.101315   25774 system_pods.go:61] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 18:42:11.101319   25774 system_pods.go:61] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:11.101324   25774 system_pods.go:61] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:11.101329   25774 system_pods.go:61] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0912 18:42:11.101335   25774 system_pods.go:61] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 18:42:11.101344   25774 system_pods.go:61] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 18:42:11.101349   25774 system_pods.go:61] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:11.101354   25774 system_pods.go:61] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:11.101359   25774 system_pods.go:61] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 18:42:11.101365   25774 system_pods.go:61] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 18:42:11.101374   25774 system_pods.go:61] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 18:42:11.101381   25774 system_pods.go:74] duration metric: took 11.751351ms to wait for pod list to return data ...
	I0912 18:42:11.101392   25774 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:42:11.101439   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:11.101446   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.101454   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.101459   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.105805   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.105819   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.105827   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.105841   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.105847   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.105852   25774 round_trippers.go:580]     Audit-Id: ee581856-dce8-447b-8358-f37a47339ad8
	I0912 18:42:11.105857   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.105862   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.106297   25774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13670 chars]
	I0912 18:42:11.106975   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.106994   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107003   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.107007   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107011   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.107014   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107018   25774 node_conditions.go:105] duration metric: took 5.622718ms to run NodePressure ...
	I0912 18:42:11.107031   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:11.464902   25774 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0912 18:42:11.464923   25774 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0912 18:42:11.464949   25774 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0912 18:42:11.465044   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0912 18:42:11.465058   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.465069   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.465075   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.467829   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.467850   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.467860   25774 round_trippers.go:580]     Audit-Id: 78f00d5a-7eb8-4a9e-b90d-d323283aff0d
	I0912 18:42:11.467868   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.467874   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.467879   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.467884   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.467890   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.468461   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"776"},"items":[{"metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"762","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I0912 18:42:11.469818   25774 kubeadm.go:787] kubelet initialised
	I0912 18:42:11.469838   25774 kubeadm.go:788] duration metric: took 4.877378ms waiting for restarted kubelet to initialise ...
	I0912 18:42:11.469845   25774 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:11.469907   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:11.469918   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.469928   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.469935   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.473358   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:11.473371   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.473376   25774 round_trippers.go:580]     Audit-Id: 1e9316d5-4fa8-4920-a611-94538e5de9d2
	I0912 18:42:11.473382   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.473390   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.473395   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.473400   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.473405   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.475084   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"776"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84576 chars]
	I0912 18:42:11.477518   25774 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.477580   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:11.477588   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.477595   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.477600   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.480258   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.480275   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.480281   25774 round_trippers.go:580]     Audit-Id: 26d11606-643c-4680-9fdd-7c6079a0b9d0
	I0912 18:42:11.480287   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.480292   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.480297   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.480302   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.480307   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.480652   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:11.481023   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.481034   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.481040   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.481046   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.483036   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.483082   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.483091   25774 round_trippers.go:580]     Audit-Id: e8a416f7-552b-4b46-8961-5851964b96f3
	I0912 18:42:11.483096   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.483102   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.483111   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.483120   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.483142   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.483383   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.483638   25774 pod_ready.go:97] node "multinode-348977" hosting pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.483651   25774 pod_ready.go:81] duration metric: took 6.116525ms waiting for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.483658   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.483665   25774 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.483707   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348977
	I0912 18:42:11.483714   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.483721   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.483726   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.485560   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.485573   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.485578   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.485583   25774 round_trippers.go:580]     Audit-Id: 4142a2e4-b055-4e71-a505-4b1655dfe4ed
	I0912 18:42:11.485588   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.485593   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.485598   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.485604   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.485740   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"762","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I0912 18:42:11.486046   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.486056   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.486063   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.486069   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.487681   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.487700   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.487708   25774 round_trippers.go:580]     Audit-Id: 183de2f6-83fd-4668-8968-150418f82b3c
	I0912 18:42:11.487716   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.487725   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.487731   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.487739   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.487744   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.488007   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.488347   25774 pod_ready.go:97] node "multinode-348977" hosting pod "etcd-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.488365   25774 pod_ready.go:81] duration metric: took 4.694293ms waiting for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.488375   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "etcd-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.488396   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.488451   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348977
	I0912 18:42:11.488461   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.488472   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.488485   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.490841   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.490854   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.490860   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.490865   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.490870   25774 round_trippers.go:580]     Audit-Id: d2933280-f99b-440c-a008-24b2a483ce04
	I0912 18:42:11.490875   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.490880   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.490885   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.491841   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348977","namespace":"kube-system","uid":"f540dfd0-b1d9-4e3f-b9ab-f02db770e920","resourceVersion":"763","creationTimestamp":"2023-09-12T18:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.209:8443","kubernetes.io/config.hash":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.mirror":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.seen":"2023-09-12T18:38:05.461231178Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I0912 18:42:11.492324   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.492342   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.492359   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.492373   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.494392   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.494403   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.494409   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.494414   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.494419   25774 round_trippers.go:580]     Audit-Id: 2b3e9b5c-ec6e-47ef-8eb4-42325dd1cadd
	I0912 18:42:11.494425   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.494430   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.494435   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.494702   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.495039   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-apiserver-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.495057   25774 pod_ready.go:81] duration metric: took 6.649671ms waiting for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.495064   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-apiserver-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.495070   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.495114   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348977
	I0912 18:42:11.495121   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.495127   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.495134   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.496898   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.496911   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.496927   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.496938   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.496951   25774 round_trippers.go:580]     Audit-Id: 0f5e0750-0fcd-4f41-abcd-d523c6aae03a
	I0912 18:42:11.496960   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.496973   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.496986   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.497810   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348977","namespace":"kube-system","uid":"930d0357-f21e-4a4e-8c3b-2cff3263568f","resourceVersion":"764","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.mirror":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.seen":"2023-09-12T18:37:56.784236763Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I0912 18:42:11.498183   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.498196   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.498203   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.498209   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.500292   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.500307   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.500316   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.500326   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.500336   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.500351   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.500361   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.500373   25774 round_trippers.go:580]     Audit-Id: d118dc25-8eab-43dd-a453-09eea91ee36a
	I0912 18:42:11.500556   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.500842   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-controller-manager-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.500855   25774 pod_ready.go:81] duration metric: took 5.775247ms waiting for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.500863   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-controller-manager-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.500880   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.690301   25774 request.go:629] Waited for 189.366968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:11.690363   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:11.690369   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.690379   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.690387   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.694882   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.694902   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.694909   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.694914   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.694919   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.694925   25774 round_trippers.go:580]     Audit-Id: 96fbd2df-71d0-42f2-9668-9a4751b3b372
	I0912 18:42:11.694930   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.694943   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.695466   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wfpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"774a14f5-3c1d-4a3b-a265-290361f0fbe3","resourceVersion":"515","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0912 18:42:11.890223   25774 request.go:629] Waited for 194.343656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:11.890300   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:11.890309   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.890325   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.890341   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.893961   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:11.893979   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.893985   25774 round_trippers.go:580]     Audit-Id: d58ddf1c-05d0-4d76-9b86-e75d4563c79f
	I0912 18:42:11.893991   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.893996   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.894003   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.894011   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.894022   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.894278   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m02","uid":"0a11e94b-756b-4c81-9734-627ddcc38b98","resourceVersion":"581","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3266 chars]
	I0912 18:42:11.894534   25774 pod_ready.go:92] pod "kube-proxy-2wfpr" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:11.894547   25774 pod_ready.go:81] duration metric: took 393.659737ms waiting for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.894556   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.089913   25774 request.go:629] Waited for 195.278265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:12.089988   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:12.089997   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.090007   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.090021   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.092533   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.092550   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.092557   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.092563   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.092568   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.092573   25774 round_trippers.go:580]     Audit-Id: ab50bf62-fff6-4392-8010-8f7ac978ac19
	I0912 18:42:12.092578   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.092591   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.092750   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvnqz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d610f9be-c231-4aae-9870-e627ce41bf23","resourceVersion":"736","creationTimestamp":"2023-09-12T18:39:59Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0912 18:42:12.290497   25774 request.go:629] Waited for 197.357363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:12.290566   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:12.290571   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.290578   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.290608   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.293062   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.293078   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.293084   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.293089   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.293094   25774 round_trippers.go:580]     Audit-Id: e4b3f466-3c77-4c24-b13b-af89b75e0355
	I0912 18:42:12.293099   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.293104   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.293108   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.293215   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m03","uid":"03d033eb-43a1-4b37-a2a0-6de70662f3e7","resourceVersion":"753","creationTimestamp":"2023-09-12T18:40:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:40:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0912 18:42:12.293445   25774 pod_ready.go:92] pod "kube-proxy-fvnqz" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:12.293456   25774 pod_ready.go:81] duration metric: took 398.886453ms waiting for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.293465   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.489818   25774 request.go:629] Waited for 196.284343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:12.489872   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:12.489876   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.489884   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.489890   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.492417   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.492438   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.492447   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.492457   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.492465   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.492474   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.492483   25774 round_trippers.go:580]     Audit-Id: 8e58b70d-eb64-4ad7-8f40-d0b9d1828c0c
	I0912 18:42:12.492488   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.493218   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gp457","generateName":"kube-proxy-","namespace":"kube-system","uid":"39d70e08-cba7-4545-a6eb-a2e9152458dc","resourceVersion":"769","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I0912 18:42:12.689962   25774 request.go:629] Waited for 196.319367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:12.690069   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:12.690082   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.690089   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.690095   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.692930   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.692946   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.692952   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.692957   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.692963   25774 round_trippers.go:580]     Audit-Id: 65cc805b-a9c2-4b93-b29d-314f12cbeece
	I0912 18:42:12.692970   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.692978   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.692991   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.693385   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:12.693887   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-proxy-gp457" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:12.693914   25774 pod_ready.go:81] duration metric: took 400.443481ms waiting for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:12.693926   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-proxy-gp457" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:12.693942   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.890377   25774 request.go:629] Waited for 196.363771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:12.890460   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:12.890467   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.890477   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.890486   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.893424   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.893446   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.893456   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.893471   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.893483   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.893490   25774 round_trippers.go:580]     Audit-Id: d470dad8-0297-4d0d-a80e-ac6f86679c42
	I0912 18:42:12.893497   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.893505   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.893673   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348977","namespace":"kube-system","uid":"69ef187d-8c5d-4b26-861e-4a2178c309e7","resourceVersion":"765","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.mirror":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.seen":"2023-09-12T18:37:56.784237754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I0912 18:42:13.090457   25774 request.go:629] Waited for 196.397433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.090511   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.090515   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.090523   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.090532   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.093408   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.093432   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.093443   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.093452   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.093461   25774 round_trippers.go:580]     Audit-Id: 867db361-341c-4413-be9c-31e1e7cc54ab
	I0912 18:42:13.093470   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.093482   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.093490   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.093840   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.094121   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-scheduler-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:13.094136   25774 pod_ready.go:81] duration metric: took 400.181932ms waiting for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:13.094144   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-scheduler-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:13.094153   25774 pod_ready.go:38] duration metric: took 1.62429968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:13.094171   25774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 18:42:13.109771   25774 command_runner.go:130] > -16
	I0912 18:42:13.109820   25774 ops.go:34] apiserver oom_adj: -16
	I0912 18:42:13.109828   25774 kubeadm.go:640] restartCluster took 22.040060524s
	I0912 18:42:13.109838   25774 kubeadm.go:406] StartCluster complete in 22.067960392s
	I0912 18:42:13.109857   25774 settings.go:142] acquiring lock: {Name:mk701ee4b509c72ea6c30dd8b1ed35b0318b6f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:42:13.109946   25774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:42:13.110630   25774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3674/kubeconfig: {Name:mked094375583bdbe55c31d17add6f22f93c8430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:42:13.110874   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 18:42:13.110895   25774 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0912 18:42:13.113800   25774 out.go:177] * Enabled addons: 
	I0912 18:42:13.111083   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:13.111144   25774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:42:13.115047   25774 addons.go:502] enable addons completed in 4.162587ms: enabled=[]
	I0912 18:42:13.115270   25774 kapi.go:59] client config for multinode-348977: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key", CAFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 18:42:13.115629   25774 round_trippers.go:463] GET https://192.168.39.209:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 18:42:13.115643   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.115653   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.115662   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.118334   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.118358   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.118367   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.118375   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.118386   25774 round_trippers.go:580]     Content-Length: 291
	I0912 18:42:13.118396   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.118404   25774 round_trippers.go:580]     Audit-Id: 977f14e1-5a64-4189-aeab-98356e20ae68
	I0912 18:42:13.118415   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.118423   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.118453   25774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"689d8907-7a8c-41b5-a29a-3d911c1eccad","resourceVersion":"775","creationTimestamp":"2023-09-12T18:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0912 18:42:13.118646   25774 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-348977" context rescaled to 1 replicas
	I0912 18:42:13.118678   25774 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 18:42:13.120242   25774 out.go:177] * Verifying Kubernetes components...
	I0912 18:42:13.121523   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:13.243631   25774 command_runner.go:130] > apiVersion: v1
	I0912 18:42:13.243656   25774 command_runner.go:130] > data:
	I0912 18:42:13.243663   25774 command_runner.go:130] >   Corefile: |
	I0912 18:42:13.243669   25774 command_runner.go:130] >     .:53 {
	I0912 18:42:13.243674   25774 command_runner.go:130] >         log
	I0912 18:42:13.243681   25774 command_runner.go:130] >         errors
	I0912 18:42:13.243688   25774 command_runner.go:130] >         health {
	I0912 18:42:13.243695   25774 command_runner.go:130] >            lameduck 5s
	I0912 18:42:13.243700   25774 command_runner.go:130] >         }
	I0912 18:42:13.243708   25774 command_runner.go:130] >         ready
	I0912 18:42:13.243717   25774 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0912 18:42:13.243723   25774 command_runner.go:130] >            pods insecure
	I0912 18:42:13.243736   25774 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0912 18:42:13.243744   25774 command_runner.go:130] >            ttl 30
	I0912 18:42:13.243751   25774 command_runner.go:130] >         }
	I0912 18:42:13.243768   25774 command_runner.go:130] >         prometheus :9153
	I0912 18:42:13.243775   25774 command_runner.go:130] >         hosts {
	I0912 18:42:13.243783   25774 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0912 18:42:13.243791   25774 command_runner.go:130] >            fallthrough
	I0912 18:42:13.243797   25774 command_runner.go:130] >         }
	I0912 18:42:13.243806   25774 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0912 18:42:13.243816   25774 command_runner.go:130] >            max_concurrent 1000
	I0912 18:42:13.243822   25774 command_runner.go:130] >         }
	I0912 18:42:13.243832   25774 command_runner.go:130] >         cache 30
	I0912 18:42:13.243840   25774 command_runner.go:130] >         loop
	I0912 18:42:13.243849   25774 command_runner.go:130] >         reload
	I0912 18:42:13.243856   25774 command_runner.go:130] >         loadbalance
	I0912 18:42:13.243867   25774 command_runner.go:130] >     }
	I0912 18:42:13.243874   25774 command_runner.go:130] > kind: ConfigMap
	I0912 18:42:13.243887   25774 command_runner.go:130] > metadata:
	I0912 18:42:13.243899   25774 command_runner.go:130] >   creationTimestamp: "2023-09-12T18:38:05Z"
	I0912 18:42:13.243906   25774 command_runner.go:130] >   name: coredns
	I0912 18:42:13.243914   25774 command_runner.go:130] >   namespace: kube-system
	I0912 18:42:13.243921   25774 command_runner.go:130] >   resourceVersion: "402"
	I0912 18:42:13.243933   25774 command_runner.go:130] >   uid: 2097770d-506f-410e-985d-435a9559f646
	I0912 18:42:13.246000   25774 node_ready.go:35] waiting up to 6m0s for node "multinode-348977" to be "Ready" ...
	I0912 18:42:13.249599   25774 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0912 18:42:13.290690   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.290724   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.290733   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.290739   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.293505   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.293530   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.293550   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.293559   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.293566   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.293575   25774 round_trippers.go:580]     Audit-Id: 20c8a9a7-ec03-44a7-92e1-ea050ea6d00e
	I0912 18:42:13.293585   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.293593   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.293869   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.490632   25774 request.go:629] Waited for 196.383051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.490691   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.490698   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.490712   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.490725   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.494985   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:13.495009   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.495018   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.495032   25774 round_trippers.go:580]     Audit-Id: 5b0c52ae-aeab-43df-8442-d1ed1c51940b
	I0912 18:42:13.495040   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.495049   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.495062   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.495075   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.495435   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.996558   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.996593   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.996606   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.996614   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.999491   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.999512   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.999521   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.999529   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.999536   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.999544   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.999551   25774 round_trippers.go:580]     Audit-Id: 71109b58-96c6-4648-9138-b568a04bbb01
	I0912 18:42:13.999560   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.000156   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:14.496863   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:14.496885   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:14.496893   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:14.496899   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:14.499602   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:14.499621   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:14.499631   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.499640   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:14.499648   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:14.499657   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:14.499670   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:14 GMT
	I0912 18:42:14.499679   25774 round_trippers.go:580]     Audit-Id: 2f6f7361-0c59-4c7d-8e76-3e32fdddd5c7
	I0912 18:42:14.499882   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:14.996605   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:14.996627   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:14.996635   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:14.996642   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:14.999646   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:14.999672   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:14.999682   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:14 GMT
	I0912 18:42:14.999688   25774 round_trippers.go:580]     Audit-Id: caa22104-8cb4-422b-9f8c-58a2074742d9
	I0912 18:42:14.999693   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.999698   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:14.999703   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:14.999712   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.000041   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:15.496765   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.496794   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.496806   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.496816   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.499654   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:15.499672   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.499679   25774 round_trippers.go:580]     Audit-Id: 770efff3-c863-4db0-aed0-8b0e4f8a7f95
	I0912 18:42:15.499684   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.499689   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.499694   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.499699   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.499704   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.499880   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:15.500255   25774 node_ready.go:49] node "multinode-348977" has status "Ready":"True"
	I0912 18:42:15.500274   25774 node_ready.go:38] duration metric: took 2.254247875s waiting for node "multinode-348977" to be "Ready" ...
	I0912 18:42:15.500284   25774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:15.500345   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:15.500357   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.500368   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.500378   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.503778   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:15.503796   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.503806   25774 round_trippers.go:580]     Audit-Id: 8d81f8ed-046a-432d-a997-45f7f7e48558
	I0912 18:42:15.503816   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.503826   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.503835   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.503840   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.503845   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.506034   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"847"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83986 chars]
	I0912 18:42:15.508504   25774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:15.508567   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:15.508575   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.508582   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.508590   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.510706   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:15.510722   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.510731   25774 round_trippers.go:580]     Audit-Id: c9cdf90d-6c73-47f6-ae2c-89120b937596
	I0912 18:42:15.510739   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.510747   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.510755   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.510763   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.510771   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.511016   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:15.511382   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.511392   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.511399   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.511404   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.512961   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.512976   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.512984   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.512992   25774 round_trippers.go:580]     Audit-Id: cb80b5ee-0f06-45a7-a84b-162ab1d3304c
	I0912 18:42:15.513000   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.513006   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.513011   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.513016   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.513296   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:15.513696   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:15.513711   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.513722   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.513732   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.515569   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.515587   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.515597   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.515604   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.515609   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.515614   25774 round_trippers.go:580]     Audit-Id: d54f597f-34a0-4a2a-8871-cd8c93e54504
	I0912 18:42:15.515619   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.515625   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.515763   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:15.516137   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.516152   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.516161   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.516170   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.517873   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.517885   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.517891   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.517896   25774 round_trippers.go:580]     Audit-Id: 5c3f42bb-785e-463b-8c12-afb92be30ba6
	I0912 18:42:15.517902   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.517911   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.517925   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.517932   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.518198   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:16.019174   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:16.019195   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.019203   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.019209   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.021794   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.021810   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.021824   25774 round_trippers.go:580]     Audit-Id: 9b21fe37-c8aa-4aa9-9f91-f2ff18584580
	I0912 18:42:16.021830   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.021842   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.021854   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.021862   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.021878   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.022301   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:16.022856   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:16.022871   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.022878   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.022884   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.024919   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.024939   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.024949   25774 round_trippers.go:580]     Audit-Id: cc43d89f-08b3-4bc9-a101-2bd12aabeb44
	I0912 18:42:16.024964   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.024977   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.024986   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.024993   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.025004   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.025365   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:16.518991   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:16.519035   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.519045   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.519051   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.521811   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.521835   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.521846   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.521855   25774 round_trippers.go:580]     Audit-Id: bdd70f78-8f74-4b34-8b50-3cdda2609256
	I0912 18:42:16.521865   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.521874   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.521887   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.521899   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.522407   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:16.522902   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:16.522916   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.522923   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.522928   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.525008   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.525028   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.525037   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.525046   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.525062   25774 round_trippers.go:580]     Audit-Id: cf83a590-0ef9-44db-9645-64394ec5153e
	I0912 18:42:16.525070   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.525083   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.525094   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.525483   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.019210   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:17.019234   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.019242   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.019248   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.022180   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.022204   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.022214   25774 round_trippers.go:580]     Audit-Id: 6d0c1aa8-a449-4955-862b-564e063d1920
	I0912 18:42:17.022223   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.022233   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.022240   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.022249   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.022261   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.022965   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:17.023459   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:17.023473   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.023480   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.023486   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.025651   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.025670   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.025679   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.025688   25774 round_trippers.go:580]     Audit-Id: 1a36148e-df68-4ad3-ad5d-a35f5dec8c94
	I0912 18:42:17.025701   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.025709   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.025719   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.025727   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.026038   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.518682   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:17.518705   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.518716   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.518727   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.521775   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:17.521818   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.521829   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.521839   25774 round_trippers.go:580]     Audit-Id: f9326923-a290-4bad-abdf-2105fe92c5b4
	I0912 18:42:17.521848   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.521858   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.521868   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.521881   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.522258   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:17.522690   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:17.522702   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.522709   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.522715   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.525035   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.525050   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.525056   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.525061   25774 round_trippers.go:580]     Audit-Id: 55b1395b-82d0-4899-9ce6-224c280343e7
	I0912 18:42:17.525066   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.525071   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.525077   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.525082   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.525229   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.525477   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:18.018887   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:18.018910   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.018919   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.018925   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.021874   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:18.021896   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.021906   25774 round_trippers.go:580]     Audit-Id: ac7164e8-d532-42d7-8e73-9713804614fe
	I0912 18:42:18.021916   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.021925   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.021934   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.021941   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.021946   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.022226   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:18.022712   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:18.022732   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.022739   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.022745   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.024717   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:18.024730   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.024736   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.024741   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.024746   25774 round_trippers.go:580]     Audit-Id: dfceff8d-4f6c-43e8-bf6a-8631bb9a3cce
	I0912 18:42:18.024751   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.024756   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.024761   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.025289   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:18.518950   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:18.518978   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.518986   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.518992   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.522096   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:18.522119   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.522132   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.522138   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.522143   25774 round_trippers.go:580]     Audit-Id: fa5de5c7-191b-4b2f-beff-478320c3a667
	I0912 18:42:18.522150   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.522158   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.522166   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.522703   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:18.523115   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:18.523126   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.523133   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.523139   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.525574   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:18.525593   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.525599   25774 round_trippers.go:580]     Audit-Id: ab6931c6-4f57-4c7b-b9aa-12d3508c6379
	I0912 18:42:18.525605   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.525610   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.525615   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.525620   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.525625   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.525939   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.018605   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:19.018628   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.018636   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.018642   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.021331   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:19.021351   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.021359   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.021366   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.021374   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.021382   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.021392   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.021414   25774 round_trippers.go:580]     Audit-Id: 27419a73-847c-4f51-bd91-89e052f1edb4
	I0912 18:42:19.021912   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:19.022329   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:19.022342   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.022349   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.022355   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.024339   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:19.024359   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.024368   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.024378   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.024388   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.024395   25774 round_trippers.go:580]     Audit-Id: d84dc941-78b5-4672-a72d-ddd4cb0d7c29
	I0912 18:42:19.024409   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.024418   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.024715   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.519434   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:19.519470   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.519482   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.519491   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.522336   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:19.522358   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.522368   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.522376   25774 round_trippers.go:580]     Audit-Id: 3558f8cc-a921-4e89-84e1-6ac9cde9cd1e
	I0912 18:42:19.522385   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.522394   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.522405   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.522420   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.522736   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:19.523291   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:19.523305   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.523312   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.523317   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.525325   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:19.525338   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.525345   25774 round_trippers.go:580]     Audit-Id: aa86495f-0156-411b-a070-e22a45555259
	I0912 18:42:19.525350   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.525355   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.525361   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.525369   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.525377   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.525749   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.526064   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:20.019472   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:20.019497   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.019509   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.019520   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.022460   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.022483   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.022493   25774 round_trippers.go:580]     Audit-Id: 7716efa4-da02-44ff-bfa0-9dcb7861e619
	I0912 18:42:20.022502   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.022510   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.022518   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.022526   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.022539   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.023124   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:20.023619   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:20.023637   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.023647   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.023654   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.026215   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.026229   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.026235   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.026240   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.026246   25774 round_trippers.go:580]     Audit-Id: fa7b0ca0-8540-4605-82ab-535bcc959a68
	I0912 18:42:20.026254   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.026263   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.026272   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.026670   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:20.519423   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:20.519453   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.519465   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.519475   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.522173   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.522191   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.522201   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.522207   25774 round_trippers.go:580]     Audit-Id: 0e759373-0227-4917-8a2a-ff09025291b0
	I0912 18:42:20.522212   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.522217   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.522222   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.522229   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.522492   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:20.523018   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:20.523039   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.523047   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.523055   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.525064   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:20.525083   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.525093   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.525101   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.525112   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.525123   25774 round_trippers.go:580]     Audit-Id: e9114154-e1ac-42a2-857e-78c0a336e42e
	I0912 18:42:20.525134   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.525145   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.525296   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:21.018922   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:21.018949   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.018962   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.018985   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.021703   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.021729   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.021742   25774 round_trippers.go:580]     Audit-Id: 21cbb700-4503-4ee3-80d7-74b589e29284
	I0912 18:42:21.021750   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.021757   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.021768   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.021774   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.021784   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.022150   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:21.022724   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:21.022739   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.022746   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.022751   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.024834   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.024852   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.024872   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.024890   25774 round_trippers.go:580]     Audit-Id: eb216dd9-7990-4170-99be-ce935ee83b5b
	I0912 18:42:21.024898   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.024906   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.024912   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.024917   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.025328   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:21.519273   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:21.519299   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.519312   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.519321   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.521761   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.521787   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.521797   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.521804   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.521810   25774 round_trippers.go:580]     Audit-Id: df785d50-875d-4b1e-b8b8-fa57c4b91949
	I0912 18:42:21.521815   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.521820   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.521826   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.522091   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:21.522763   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:21.522783   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.522795   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.522804   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.525017   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.525035   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.525042   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.525047   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.525056   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.525062   25774 round_trippers.go:580]     Audit-Id: 5f96ccf0-b25d-421e-8606-0097faa881df
	I0912 18:42:21.525067   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.525072   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.525186   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:22.018823   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:22.018846   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.018854   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.018861   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.021780   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.021805   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.021816   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.021823   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.021829   25774 round_trippers.go:580]     Audit-Id: 6d1f4828-16d5-4d36-9236-531d7c6463cb
	I0912 18:42:22.021834   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.021840   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.021845   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.022058   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:22.022637   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:22.022651   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.022658   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.022666   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.025235   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.025254   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.025264   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.025274   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.025289   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.025298   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.025307   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.025314   25774 round_trippers.go:580]     Audit-Id: 28240cd4-59d0-4af1-9428-aa9546fb2eb2
	I0912 18:42:22.025722   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:22.025994   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:22.519498   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:22.519533   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.519545   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.519615   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.522127   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.522155   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.522165   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.522173   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.522182   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.522190   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.522198   25774 round_trippers.go:580]     Audit-Id: 72500589-0df9-4e05-a284-4aab07bc1a90
	I0912 18:42:22.522205   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.522539   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:22.523066   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:22.523081   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.523088   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.523093   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.526415   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:22.526434   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.526444   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.526452   25774 round_trippers.go:580]     Audit-Id: aba69887-23b0-4ad3-9591-255738e5c9cd
	I0912 18:42:22.526472   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.526480   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.526489   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.526497   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.526919   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:23.018648   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:23.018677   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.018688   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.018697   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.021371   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.021393   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.021401   25774 round_trippers.go:580]     Audit-Id: 308ed084-779c-4b7e-a6f9-8d335954f26d
	I0912 18:42:23.021409   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.021417   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.021426   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.021433   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.021442   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.021652   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:23.022261   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:23.022276   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.022283   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.022296   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.024418   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.024438   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.024447   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.024460   25774 round_trippers.go:580]     Audit-Id: f8385c41-1ac3-4937-8e08-0853d2f07b61
	I0912 18:42:23.024472   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.024480   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.024493   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.024502   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.024636   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:23.519355   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:23.519379   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.519387   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.519393   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.521923   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.521935   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.521941   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.521946   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.521952   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.521961   25774 round_trippers.go:580]     Audit-Id: 8df3aec7-70dd-46ad-9b0c-60e47006f66d
	I0912 18:42:23.521967   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.521972   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.522313   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:23.522786   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:23.522800   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.522807   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.522813   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.525090   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.525102   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.525108   25774 round_trippers.go:580]     Audit-Id: c5a51933-8848-4d9d-86b8-cfa9a1715c83
	I0912 18:42:23.525113   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.525118   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.525123   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.525128   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.525133   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.525525   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.019244   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:24.019267   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.019275   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.019281   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.022149   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.022178   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.022184   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.022190   25774 round_trippers.go:580]     Audit-Id: 0e54f509-fea2-4447-95e7-0adef2cc4a71
	I0912 18:42:24.022195   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.022206   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.022214   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.022224   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.022566   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:24.023022   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:24.023034   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.023041   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.023047   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.025255   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.025267   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.025273   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.025278   25774 round_trippers.go:580]     Audit-Id: 4012be6b-a1d1-4822-8153-07785c0f087c
	I0912 18:42:24.025285   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.025290   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.025295   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.025300   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.025513   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.519223   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:24.519245   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.519253   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.519259   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.521740   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.521759   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.521769   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.521778   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.521806   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.521820   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.521828   25774 round_trippers.go:580]     Audit-Id: dd7800e3-c33b-4d72-b2a4-1f435c3588d6
	I0912 18:42:24.521838   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.522426   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:24.522882   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:24.522894   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.522904   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.522916   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.524935   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.524951   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.524966   25774 round_trippers.go:580]     Audit-Id: e70caa01-d717-4aa8-b453-e6b24304ecb7
	I0912 18:42:24.524975   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.524987   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.524992   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.524997   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.525003   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.525181   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.525544   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:25.018770   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:25.018797   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.018809   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.018818   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.021606   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.021658   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.021669   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.021678   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.021685   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.021697   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.021709   25774 round_trippers.go:580]     Audit-Id: a707fbe4-5e5a-4a76-9552-ae18693b3ade
	I0912 18:42:25.021718   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.023780   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:25.024379   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:25.024398   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.024408   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.024426   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.026655   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.026674   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.026683   25774 round_trippers.go:580]     Audit-Id: 1d90b0fb-50df-4d38-bf46-db8bc42a342b
	I0912 18:42:25.026691   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.026701   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.026709   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.026718   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.026726   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.026889   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:25.519646   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:25.519678   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.519688   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.519694   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.522377   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.522402   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.522425   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.522434   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.522443   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.522455   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.522465   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.522475   25774 round_trippers.go:580]     Audit-Id: 92944b9c-849f-477a-8160-683445d1d4a8
	I0912 18:42:25.523055   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:25.523492   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:25.523505   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.523512   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.523517   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.526015   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.526034   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.526046   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.526055   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.526071   25774 round_trippers.go:580]     Audit-Id: 1ee1e622-a49f-4d1d-bc0d-11e709dd8dda
	I0912 18:42:25.526079   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.526090   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.526096   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.526218   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:26.019099   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:26.019117   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.019125   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.019131   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.022416   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:26.022440   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.022450   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.022456   25774 round_trippers.go:580]     Audit-Id: 49021994-f425-426a-b645-d11b0bef6ff2
	I0912 18:42:26.022461   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.022469   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.022474   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.022480   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.023077   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:26.023486   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:26.023497   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.023504   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.023509   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.026077   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.026094   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.026104   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.026111   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.026118   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.026126   25774 round_trippers.go:580]     Audit-Id: 52a298de-ed90-4986-b7be-58541206edef
	I0912 18:42:26.026135   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.026145   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.026364   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:26.519031   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:26.519052   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.519060   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.519067   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.521656   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.521678   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.521686   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.521691   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.521696   25774 round_trippers.go:580]     Audit-Id: 0ff6011d-720e-449b-8eb1-46b0e14ea217
	I0912 18:42:26.521701   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.521706   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.521711   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.522134   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:26.522540   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:26.522554   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.522560   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.522566   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.524786   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.524800   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.524806   25774 round_trippers.go:580]     Audit-Id: 2d945f25-12df-4442-8171-8110b7ec953e
	I0912 18:42:26.524811   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.524819   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.524827   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.524836   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.524845   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.525119   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.018765   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:27.018794   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.018805   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.018815   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.025520   25774 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0912 18:42:27.025543   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.025553   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.025560   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.025567   25774 round_trippers.go:580]     Audit-Id: 6018ab1b-6d81-43ce-9088-f6d64d3ef8f9
	I0912 18:42:27.025576   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.025584   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.025591   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.025734   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"882","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6722 chars]
	I0912 18:42:27.026159   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.026170   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.026177   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.026183   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.029454   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:27.029469   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.029476   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.029481   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.029486   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.029491   25774 round_trippers.go:580]     Audit-Id: 058f3ee6-b56c-4d93-b76e-c92601975585
	I0912 18:42:27.029497   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.029506   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.029625   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.029892   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:27.519325   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:27.519347   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.519355   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.519361   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.521737   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.521753   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.521760   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.521765   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.521771   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.521776   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.521781   25774 round_trippers.go:580]     Audit-Id: 0685a0df-f7eb-4093-ab97-48796cc84165
	I0912 18:42:27.521789   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.522261   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0912 18:42:27.522688   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.522699   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.522706   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.522712   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.524650   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.524669   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.524679   25774 round_trippers.go:580]     Audit-Id: fd53992f-f915-482b-91e2-7915a59fa965
	I0912 18:42:27.524688   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.524696   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.524707   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.524715   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.524739   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.525068   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.525328   25774 pod_ready.go:92] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.525341   25774 pod_ready.go:81] duration metric: took 12.016818518s waiting for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.525348   25774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.525392   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348977
	I0912 18:42:27.525399   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.525406   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.525411   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.527348   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.527362   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.527368   25774 round_trippers.go:580]     Audit-Id: 7c82a007-a8b9-458c-b72f-b5158f5d9f79
	I0912 18:42:27.527373   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.527379   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.527384   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.527392   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.527397   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.527569   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"870","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0912 18:42:27.527970   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.527988   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.527999   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.528008   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.529544   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.529556   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.529562   25774 round_trippers.go:580]     Audit-Id: 49c08a23-43c6-4b36-97cd-cdf623268d39
	I0912 18:42:27.529567   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.529572   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.529580   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.529585   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.529590   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.529750   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.530037   25774 pod_ready.go:92] pod "etcd-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.530051   25774 pod_ready.go:81] duration metric: took 4.69789ms waiting for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.530068   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.530109   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348977
	I0912 18:42:27.530119   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.530129   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.530140   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.532020   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.532031   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.532036   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.532041   25774 round_trippers.go:580]     Audit-Id: 69b6d28a-cc81-4865-a415-98d5e4ab2e88
	I0912 18:42:27.532046   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.532052   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.532061   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.532066   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.532210   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348977","namespace":"kube-system","uid":"f540dfd0-b1d9-4e3f-b9ab-f02db770e920","resourceVersion":"857","creationTimestamp":"2023-09-12T18:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.209:8443","kubernetes.io/config.hash":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.mirror":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.seen":"2023-09-12T18:38:05.461231178Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I0912 18:42:27.532613   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.532626   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.532633   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.532639   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.534337   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.534348   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.534354   25774 round_trippers.go:580]     Audit-Id: d8ed022c-9bdc-426c-8417-9bdbab3e0568
	I0912 18:42:27.534359   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.534364   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.534368   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.534373   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.534378   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.534556   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.534912   25774 pod_ready.go:92] pod "kube-apiserver-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.534932   25774 pod_ready.go:81] duration metric: took 4.857194ms waiting for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.534941   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.535010   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348977
	I0912 18:42:27.535020   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.535026   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.535032   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.536478   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.536489   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.536498   25774 round_trippers.go:580]     Audit-Id: dc26cd07-5e2b-418c-81e4-ed7f5f4cea37
	I0912 18:42:27.536506   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.536520   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.536528   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.536540   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.536552   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.536872   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348977","namespace":"kube-system","uid":"930d0357-f21e-4a4e-8c3b-2cff3263568f","resourceVersion":"851","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.mirror":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.seen":"2023-09-12T18:37:56.784236763Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I0912 18:42:27.537190   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.537201   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.537208   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.537213   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.539168   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.539187   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.539196   25774 round_trippers.go:580]     Audit-Id: 96077995-eaf1-4ae5-816a-8a44fe54d0e0
	I0912 18:42:27.539205   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.539217   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.539225   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.539236   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.539247   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.539389   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.539705   25774 pod_ready.go:92] pod "kube-controller-manager-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.539721   25774 pod_ready.go:81] duration metric: took 4.774197ms waiting for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.539730   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.539778   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:27.539785   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.539792   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.539797   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.541391   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.541405   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.541412   25774 round_trippers.go:580]     Audit-Id: d5988258-541c-4b62-b811-17340c9d4c61
	I0912 18:42:27.541417   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.541422   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.541429   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.541436   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.541443   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.541635   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wfpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"774a14f5-3c1d-4a3b-a265-290361f0fbe3","resourceVersion":"515","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0912 18:42:27.541939   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:27.541951   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.541957   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.541962   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.543735   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.543753   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.543762   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.543770   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.543779   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.543787   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.543795   25774 round_trippers.go:580]     Audit-Id: c439d5a2-848f-462c-8997-8b09354202f6
	I0912 18:42:27.543803   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.544002   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m02","uid":"0a11e94b-756b-4c81-9734-627ddcc38b98","resourceVersion":"581","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3266 chars]
	I0912 18:42:27.544241   25774 pod_ready.go:92] pod "kube-proxy-2wfpr" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.544255   25774 pod_ready.go:81] duration metric: took 4.520204ms waiting for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.544264   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.719627   25774 request.go:629] Waited for 175.317143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:27.719692   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:27.719702   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.719713   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.719724   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.722697   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.722720   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.722730   25774 round_trippers.go:580]     Audit-Id: d3483f79-1d80-489b-9726-e0bcfc0757be
	I0912 18:42:27.722738   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.722746   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.722754   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.722762   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.722770   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.722965   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvnqz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d610f9be-c231-4aae-9870-e627ce41bf23","resourceVersion":"736","creationTimestamp":"2023-09-12T18:39:59Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0912 18:42:27.919793   25774 request.go:629] Waited for 196.375026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:27.919854   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:27.919859   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.919866   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.919873   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.922352   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.922369   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.922376   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.922381   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.922386   25774 round_trippers.go:580]     Audit-Id: 18dc451a-97b4-4669-a8d1-fe83de2c3208
	I0912 18:42:27.922391   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.922396   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.922401   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.922552   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m03","uid":"03d033eb-43a1-4b37-a2a0-6de70662f3e7","resourceVersion":"753","creationTimestamp":"2023-09-12T18:40:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:40:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0912 18:42:27.922880   25774 pod_ready.go:92] pod "kube-proxy-fvnqz" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.922898   25774 pod_ready.go:81] duration metric: took 378.627886ms waiting for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.922913   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.120317   25774 request.go:629] Waited for 197.342872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:28.120378   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:28.120383   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.120397   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.120412   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.123127   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.123147   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.123154   25774 round_trippers.go:580]     Audit-Id: 091153d1-359b-4f12-a3a3-ccdbdc81297d
	I0912 18:42:28.123160   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.123165   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.123170   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.123175   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.123181   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.123500   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gp457","generateName":"kube-proxy-","namespace":"kube-system","uid":"39d70e08-cba7-4545-a6eb-a2e9152458dc","resourceVersion":"844","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I0912 18:42:28.320266   25774 request.go:629] Waited for 196.341863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.320310   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.320315   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.320322   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.320328   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.322784   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.322801   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.322807   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.322812   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.322817   25774 round_trippers.go:580]     Audit-Id: 7a971df4-048d-411f-84d5-edeca5d0a808
	I0912 18:42:28.322822   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.322830   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.322838   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.323280   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:28.323558   25774 pod_ready.go:92] pod "kube-proxy-gp457" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:28.323569   25774 pod_ready.go:81] duration metric: took 400.650162ms waiting for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.323577   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.519997   25774 request.go:629] Waited for 196.359932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:28.520052   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:28.520057   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.520064   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.520070   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.522614   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.522643   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.522651   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.522657   25774 round_trippers.go:580]     Audit-Id: 81598ada-aa63-48f8-bbb7-bb3b59d03fca
	I0912 18:42:28.522663   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.522671   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.522676   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.522682   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.523030   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348977","namespace":"kube-system","uid":"69ef187d-8c5d-4b26-861e-4a2178c309e7","resourceVersion":"850","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.mirror":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.seen":"2023-09-12T18:37:56.784237754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I0912 18:42:28.719797   25774 request.go:629] Waited for 196.343169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.719852   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.719857   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.719864   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.719870   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.722628   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.722647   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.722657   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.722665   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.722670   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.722690   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.722704   25774 round_trippers.go:580]     Audit-Id: 09261421-b5b9-47f1-8400-375ba280b4aa
	I0912 18:42:28.722709   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.723026   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:28.723300   25774 pod_ready.go:92] pod "kube-scheduler-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:28.723312   25774 pod_ready.go:81] duration metric: took 399.729056ms waiting for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.723321   25774 pod_ready.go:38] duration metric: took 13.223027127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:28.723336   25774 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:42:28.723377   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:28.736121   25774 command_runner.go:130] > 1613
	I0912 18:42:28.736176   25774 api_server.go:72] duration metric: took 15.617469319s to wait for apiserver process to appear ...
	I0912 18:42:28.736186   25774 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:42:28.736202   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:28.742568   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:28.742691   25774 round_trippers.go:463] GET https://192.168.39.209:8443/version
	I0912 18:42:28.742706   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.742717   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.742742   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.743597   25774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0912 18:42:28.743611   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.743617   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.743622   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.743628   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.743635   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.743644   25774 round_trippers.go:580]     Content-Length: 263
	I0912 18:42:28.743652   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.743664   25774 round_trippers.go:580]     Audit-Id: 1f819583-ec91-4248-8d1d-f0faa5cdc977
	I0912 18:42:28.743686   25774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 18:42:28.743734   25774 api_server.go:141] control plane version: v1.28.1
	I0912 18:42:28.743746   25774 api_server.go:131] duration metric: took 7.554171ms to wait for apiserver health ...
	I0912 18:42:28.743753   25774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:42:28.920155   25774 request.go:629] Waited for 176.33099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:28.920222   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:28.920228   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.920239   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.920248   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.924609   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:28.924625   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.924631   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.924637   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.924642   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.924647   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.924652   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.924657   25774 round_trippers.go:580]     Audit-Id: 61199736-c30a-4f20-a0fe-85ab567c6748
	I0912 18:42:28.926078   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0912 18:42:28.928488   25774 system_pods.go:59] 12 kube-system pods found
	I0912 18:42:28.928508   25774 system_pods.go:61] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running
	I0912 18:42:28.928516   25774 system_pods.go:61] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running
	I0912 18:42:28.928521   25774 system_pods.go:61] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:28.928529   25774 system_pods.go:61] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:28.928543   25774 system_pods.go:61] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running
	I0912 18:42:28.928549   25774 system_pods.go:61] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running
	I0912 18:42:28.928556   25774 system_pods.go:61] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running
	I0912 18:42:28.928564   25774 system_pods.go:61] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:28.928568   25774 system_pods.go:61] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:28.928575   25774 system_pods.go:61] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running
	I0912 18:42:28.928579   25774 system_pods.go:61] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running
	I0912 18:42:28.928583   25774 system_pods.go:61] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running
	I0912 18:42:28.928589   25774 system_pods.go:74] duration metric: took 184.827503ms to wait for pod list to return data ...
	I0912 18:42:28.928596   25774 default_sa.go:34] waiting for default service account to be created ...
	I0912 18:42:29.120018   25774 request.go:629] Waited for 191.358708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/default/serviceaccounts
	I0912 18:42:29.120097   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/default/serviceaccounts
	I0912 18:42:29.120104   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.120112   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.120126   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.123049   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:29.123069   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.123079   25774 round_trippers.go:580]     Content-Length: 261
	I0912 18:42:29.123088   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.123097   25774 round_trippers.go:580]     Audit-Id: 0cac5d85-cbe1-4c12-91ee-4a50deb388eb
	I0912 18:42:29.123106   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.123115   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.123122   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.123128   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.123154   25774 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"55ef2ca3-3fa0-482c-9704-129c61fdc121","resourceVersion":"365","creationTimestamp":"2023-09-12T18:38:17Z"}}]}
	I0912 18:42:29.123368   25774 default_sa.go:45] found service account: "default"
	I0912 18:42:29.123387   25774 default_sa.go:55] duration metric: took 194.785544ms for default service account to be created ...
	I0912 18:42:29.123402   25774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 18:42:29.319837   25774 request.go:629] Waited for 196.373018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:29.319891   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:29.319922   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.319951   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.319971   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.324234   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:29.324257   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.324267   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.324275   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.324283   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.324293   25774 round_trippers.go:580]     Audit-Id: 30f84bd7-2fdc-4719-8ffc-f3f8ff44f576
	I0912 18:42:29.324301   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.324310   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.325766   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0912 18:42:29.328222   25774 system_pods.go:86] 12 kube-system pods found
	I0912 18:42:29.328242   25774 system_pods.go:89] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running
	I0912 18:42:29.328247   25774 system_pods.go:89] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running
	I0912 18:42:29.328252   25774 system_pods.go:89] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:29.328257   25774 system_pods.go:89] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:29.328263   25774 system_pods.go:89] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running
	I0912 18:42:29.328270   25774 system_pods.go:89] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running
	I0912 18:42:29.328277   25774 system_pods.go:89] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running
	I0912 18:42:29.328292   25774 system_pods.go:89] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:29.328298   25774 system_pods.go:89] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:29.328302   25774 system_pods.go:89] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running
	I0912 18:42:29.328307   25774 system_pods.go:89] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running
	I0912 18:42:29.328310   25774 system_pods.go:89] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running
	I0912 18:42:29.328316   25774 system_pods.go:126] duration metric: took 204.909135ms to wait for k8s-apps to be running ...
	I0912 18:42:29.328325   25774 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 18:42:29.328370   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:29.341359   25774 system_svc.go:56] duration metric: took 13.030228ms WaitForService to wait for kubelet.
	I0912 18:42:29.341381   25774 kubeadm.go:581] duration metric: took 16.222676844s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 18:42:29.341399   25774 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:42:29.519828   25774 request.go:629] Waited for 178.364112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:29.519911   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:29.519918   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.519929   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.519940   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.522725   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:29.522742   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.522749   25774 round_trippers.go:580]     Audit-Id: e00e13ba-2677-4829-b344-8ada38a7e166
	I0912 18:42:29.522755   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.522762   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.522770   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.522782   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.522797   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.523112   25774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
	I0912 18:42:29.523631   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523649   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523658   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523664   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523677   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523686   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523694   25774 node_conditions.go:105] duration metric: took 182.290333ms to run NodePressure ...
	I0912 18:42:29.523707   25774 start.go:228] waiting for startup goroutines ...
	I0912 18:42:29.523715   25774 start.go:233] waiting for cluster config update ...
	I0912 18:42:29.523724   25774 start.go:242] writing updated cluster config ...
	I0912 18:42:29.524158   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:29.524248   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:29.527653   25774 out.go:177] * Starting worker node multinode-348977-m02 in cluster multinode-348977
	I0912 18:42:29.529157   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:42:29.529180   25774 cache.go:57] Caching tarball of preloaded images
	I0912 18:42:29.529277   25774 preload.go:174] Found /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:42:29.529288   25774 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:42:29.529376   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:29.529545   25774 start.go:365] acquiring machines lock for multinode-348977-m02: {Name:mkb814e9f5e9709f943ea910e0cc7d91215dc74f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:42:29.529588   25774 start.go:369] acquired machines lock for "multinode-348977-m02" in 23.462µs
	I0912 18:42:29.529606   25774 start.go:96] Skipping create...Using existing machine configuration
	I0912 18:42:29.529615   25774 fix.go:54] fixHost starting: m02
	I0912 18:42:29.529896   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:29.529918   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:29.543842   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0912 18:42:29.544256   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:29.544682   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:42:29.544708   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:29.544985   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:29.545132   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:29.545265   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetState
	I0912 18:42:29.546866   25774 fix.go:102] recreateIfNeeded on multinode-348977-m02: state=Stopped err=<nil>
	I0912 18:42:29.546891   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	W0912 18:42:29.547062   25774 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 18:42:29.548960   25774 out.go:177] * Restarting existing kvm2 VM for "multinode-348977-m02" ...
	I0912 18:42:29.550233   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .Start
	I0912 18:42:29.550396   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring networks are active...
	I0912 18:42:29.551115   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring network default is active
	I0912 18:42:29.551433   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring network mk-multinode-348977 is active
	I0912 18:42:29.551771   25774 main.go:141] libmachine: (multinode-348977-m02) Getting domain xml...
	I0912 18:42:29.552344   25774 main.go:141] libmachine: (multinode-348977-m02) Creating domain...
	I0912 18:42:30.767498   25774 main.go:141] libmachine: (multinode-348977-m02) Waiting to get IP...
	I0912 18:42:30.768372   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:30.768756   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:30.768796   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:30.768731   26026 retry.go:31] will retry after 235.940556ms: waiting for machine to come up
	I0912 18:42:31.006160   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.006647   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.006677   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.006603   26026 retry.go:31] will retry after 364.360851ms: waiting for machine to come up
	I0912 18:42:31.372196   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.372728   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.372759   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.372673   26026 retry.go:31] will retry after 381.551229ms: waiting for machine to come up
	I0912 18:42:31.756143   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.756569   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.756596   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.756516   26026 retry.go:31] will retry after 467.043566ms: waiting for machine to come up
	I0912 18:42:32.225092   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:32.225542   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:32.225565   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:32.225522   26026 retry.go:31] will retry after 717.918575ms: waiting for machine to come up
	I0912 18:42:32.944665   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:32.944984   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:32.945013   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:32.944938   26026 retry.go:31] will retry after 777.588344ms: waiting for machine to come up
	I0912 18:42:33.723615   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:33.724005   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:33.724028   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:33.723989   26026 retry.go:31] will retry after 1.005231305s: waiting for machine to come up
	I0912 18:42:34.730358   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:34.730734   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:34.730770   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:34.730686   26026 retry.go:31] will retry after 958.78563ms: waiting for machine to come up
	I0912 18:42:35.690983   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:35.691399   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:35.691421   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:35.691373   26026 retry.go:31] will retry after 1.539184895s: waiting for machine to come up
	I0912 18:42:37.231731   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:37.232165   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:37.232197   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:37.232143   26026 retry.go:31] will retry after 2.237252703s: waiting for machine to come up
	I0912 18:42:39.472512   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:39.472959   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:39.473011   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:39.472905   26026 retry.go:31] will retry after 2.152692302s: waiting for machine to come up
	I0912 18:42:41.627680   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:41.628098   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:41.628133   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:41.628032   26026 retry.go:31] will retry after 2.890854285s: waiting for machine to come up
	I0912 18:42:44.521895   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:44.522238   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:44.522262   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:44.522192   26026 retry.go:31] will retry after 2.979799431s: waiting for machine to come up
	I0912 18:42:47.505585   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.506105   25774 main.go:141] libmachine: (multinode-348977-m02) Found IP for machine: 192.168.39.55
	I0912 18:42:47.506134   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has current primary IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.506144   25774 main.go:141] libmachine: (multinode-348977-m02) Reserving static IP address...
	I0912 18:42:47.506564   25774 main.go:141] libmachine: (multinode-348977-m02) Reserved static IP address: 192.168.39.55
	I0912 18:42:47.506615   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "multinode-348977-m02", mac: "52:54:00:fb:c0:ce", ip: "192.168.39.55"} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.506635   25774 main.go:141] libmachine: (multinode-348977-m02) Waiting for SSH to be available...
	I0912 18:42:47.506659   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | skip adding static IP to network mk-multinode-348977 - found existing host DHCP lease matching {name: "multinode-348977-m02", mac: "52:54:00:fb:c0:ce", ip: "192.168.39.55"}
	I0912 18:42:47.506681   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Getting to WaitForSSH function...
	I0912 18:42:47.508611   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.508965   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.508992   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.509119   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Using SSH client type: external
	I0912 18:42:47.509153   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa (-rw-------)
	I0912 18:42:47.509178   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:42:47.509190   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | About to run SSH command:
	I0912 18:42:47.509201   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | exit 0
	I0912 18:42:47.594719   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | SSH cmd err, output: <nil>: 
	I0912 18:42:47.595034   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetConfigRaw
	I0912 18:42:47.595656   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:47.598153   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.598542   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.598576   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.598809   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:47.599008   25774 machine.go:88] provisioning docker machine ...
	I0912 18:42:47.599027   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:47.599233   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.599393   25774 buildroot.go:166] provisioning hostname "multinode-348977-m02"
	I0912 18:42:47.599410   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.599573   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.601705   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.602082   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.602107   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.602240   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:47.602444   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.602620   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.602777   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:47.602919   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:47.603241   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:47.603262   25774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348977-m02 && echo "multinode-348977-m02" | sudo tee /etc/hostname
	I0912 18:42:47.727967   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348977-m02
	
	I0912 18:42:47.727992   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.730980   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.731324   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.731357   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.731546   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:47.731734   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.731942   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.732071   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:47.732251   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:47.732720   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:47.732751   25774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348977-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348977-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348977-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:42:47.851882   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:42:47.851910   25774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3674/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3674/.minikube}
	I0912 18:42:47.851930   25774 buildroot.go:174] setting up certificates
	I0912 18:42:47.851944   25774 provision.go:83] configureAuth start
	I0912 18:42:47.851961   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.852222   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:47.854839   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.855194   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.855226   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.855337   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.857401   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.857747   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.857778   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.857894   25774 provision.go:138] copyHostCerts
	I0912 18:42:47.857926   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:42:47.857965   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem, removing ...
	I0912 18:42:47.857979   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:42:47.858051   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem (1123 bytes)
	I0912 18:42:47.858137   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:42:47.858162   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem, removing ...
	I0912 18:42:47.858172   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:42:47.858209   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem (1675 bytes)
	I0912 18:42:47.858270   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:42:47.858293   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem, removing ...
	I0912 18:42:47.858300   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:42:47.858334   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem (1078 bytes)
	I0912 18:42:47.858394   25774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem org=jenkins.multinode-348977-m02 san=[192.168.39.55 192.168.39.55 localhost 127.0.0.1 minikube multinode-348977-m02]
	I0912 18:42:48.213648   25774 provision.go:172] copyRemoteCerts
	I0912 18:42:48.213711   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:42:48.213739   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.216496   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.216875   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.216910   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.217086   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.217304   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.217440   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.217540   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:48.299272   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 18:42:48.299343   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 18:42:48.323067   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 18:42:48.323135   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0912 18:42:48.346811   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 18:42:48.346879   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 18:42:48.369076   25774 provision.go:86] duration metric: configureAuth took 517.116419ms
	I0912 18:42:48.369101   25774 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:42:48.369320   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:48.369360   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:48.369693   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.372404   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.372825   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.372851   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.373017   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.373198   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.373387   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.373552   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.373737   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.374095   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.374108   25774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 18:42:48.484155   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 18:42:48.484175   25774 buildroot.go:70] root file system type: tmpfs
	I0912 18:42:48.484269   25774 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 18:42:48.484284   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.486806   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.487163   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.487199   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.487339   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.487537   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.487696   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.487860   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.487993   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.488283   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.488362   25774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.209"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 18:42:48.611547   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.209
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 18:42:48.611575   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.614223   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.614651   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.614685   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.614810   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.615012   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.615161   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.615320   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.615531   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.615880   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.615910   25774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 18:42:49.491968   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 18:42:49.491990   25774 machine.go:91] provisioned docker machine in 1.892968996s
	I0912 18:42:49.492001   25774 start.go:300] post-start starting for "multinode-348977-m02" (driver="kvm2")
	I0912 18:42:49.492011   25774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:42:49.492033   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.492389   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:42:49.492428   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.495587   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.496039   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.496074   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.496235   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.496409   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.496557   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.496709   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.580048   25774 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:42:49.584124   25774 command_runner.go:130] > NAME=Buildroot
	I0912 18:42:49.584146   25774 command_runner.go:130] > VERSION=2021.02.12-1-gaa74cea-dirty
	I0912 18:42:49.584153   25774 command_runner.go:130] > ID=buildroot
	I0912 18:42:49.584161   25774 command_runner.go:130] > VERSION_ID=2021.02.12
	I0912 18:42:49.584168   25774 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0912 18:42:49.584316   25774 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:42:49.584334   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/addons for local assets ...
	I0912 18:42:49.584409   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/files for local assets ...
	I0912 18:42:49.584509   25774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> 108482.pem in /etc/ssl/certs
	I0912 18:42:49.584523   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /etc/ssl/certs/108482.pem
	I0912 18:42:49.584635   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 18:42:49.592773   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:42:49.617623   25774 start.go:303] post-start completed in 125.608825ms
	I0912 18:42:49.617646   25774 fix.go:56] fixHost completed within 20.088031606s
	I0912 18:42:49.617665   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.620435   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.620845   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.620869   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.621069   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.621262   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.621404   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.621570   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.621758   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:49.622052   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:49.622063   25774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 18:42:49.731465   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694544169.678612481
	
	I0912 18:42:49.731485   25774 fix.go:206] guest clock: 1694544169.678612481
	I0912 18:42:49.731492   25774 fix.go:219] Guest: 2023-09-12 18:42:49.678612481 +0000 UTC Remote: 2023-09-12 18:42:49.617649209 +0000 UTC m=+83.981581209 (delta=60.963272ms)
	I0912 18:42:49.731504   25774 fix.go:190] guest clock delta is within tolerance: 60.963272ms
	I0912 18:42:49.731513   25774 start.go:83] releasing machines lock for "multinode-348977-m02", held for 20.201911405s
	I0912 18:42:49.731541   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.731783   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:49.734410   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.734890   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.734925   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.737058   25774 out.go:177] * Found network options:
	I0912 18:42:49.738484   25774 out.go:177]   - NO_PROXY=192.168.39.209
	W0912 18:42:49.739948   25774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 18:42:49.739975   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740468   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740681   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740737   25774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:42:49.740784   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	W0912 18:42:49.740858   25774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 18:42:49.740942   25774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 18:42:49.740979   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.743639   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.743671   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744084   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.744116   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744145   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.744165   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744240   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.744410   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.744416   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.744596   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.744599   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.744774   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.744769   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.744886   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.854820   25774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 18:42:49.855190   25774 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 18:42:49.855233   25774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:42:49.855293   25774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:42:49.872592   25774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0912 18:42:49.872900   25774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:42:49.872923   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:42:49.873033   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:42:49.890584   25774 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0912 18:42:49.891097   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:42:49.901256   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:42:49.911217   25774 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:42:49.911258   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:42:49.921924   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:42:49.932287   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:42:49.942216   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:42:49.952004   25774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:42:49.962020   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:42:49.971792   25774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:42:49.980297   25774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 18:42:49.980393   25774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:42:49.989544   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:50.094046   25774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:42:50.115209   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:42:50.115284   25774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 18:42:50.127032   25774 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0912 18:42:50.127923   25774 command_runner.go:130] > [Unit]
	I0912 18:42:50.127939   25774 command_runner.go:130] > Description=Docker Application Container Engine
	I0912 18:42:50.127944   25774 command_runner.go:130] > Documentation=https://docs.docker.com
	I0912 18:42:50.127950   25774 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0912 18:42:50.127955   25774 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0912 18:42:50.127962   25774 command_runner.go:130] > StartLimitBurst=3
	I0912 18:42:50.127966   25774 command_runner.go:130] > StartLimitIntervalSec=60
	I0912 18:42:50.127971   25774 command_runner.go:130] > [Service]
	I0912 18:42:50.127976   25774 command_runner.go:130] > Type=notify
	I0912 18:42:50.127985   25774 command_runner.go:130] > Restart=on-failure
	I0912 18:42:50.127996   25774 command_runner.go:130] > Environment=NO_PROXY=192.168.39.209
	I0912 18:42:50.128008   25774 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0912 18:42:50.128019   25774 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0912 18:42:50.128032   25774 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0912 18:42:50.128039   25774 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0912 18:42:50.128046   25774 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0912 18:42:50.128053   25774 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0912 18:42:50.128062   25774 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0912 18:42:50.128071   25774 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0912 18:42:50.128083   25774 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0912 18:42:50.128090   25774 command_runner.go:130] > ExecStart=
	I0912 18:42:50.128114   25774 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0912 18:42:50.128127   25774 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0912 18:42:50.128134   25774 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0912 18:42:50.128140   25774 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0912 18:42:50.128145   25774 command_runner.go:130] > LimitNOFILE=infinity
	I0912 18:42:50.128149   25774 command_runner.go:130] > LimitNPROC=infinity
	I0912 18:42:50.128154   25774 command_runner.go:130] > LimitCORE=infinity
	I0912 18:42:50.128161   25774 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0912 18:42:50.128168   25774 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0912 18:42:50.128178   25774 command_runner.go:130] > TasksMax=infinity
	I0912 18:42:50.128185   25774 command_runner.go:130] > TimeoutStartSec=0
	I0912 18:42:50.128197   25774 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0912 18:42:50.128208   25774 command_runner.go:130] > Delegate=yes
	I0912 18:42:50.128218   25774 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0912 18:42:50.128256   25774 command_runner.go:130] > KillMode=process
	I0912 18:42:50.128266   25774 command_runner.go:130] > [Install]
	I0912 18:42:50.128275   25774 command_runner.go:130] > WantedBy=multi-user.target
	I0912 18:42:50.128490   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:42:50.140278   25774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 18:42:50.156780   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:42:50.169134   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:42:50.180997   25774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:42:50.207570   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:42:50.221227   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:42:50.237822   25774 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0912 18:42:50.238226   25774 ssh_runner.go:195] Run: which cri-dockerd
	I0912 18:42:50.241514   25774 command_runner.go:130] > /usr/bin/cri-dockerd
	I0912 18:42:50.241908   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 18:42:50.250024   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 18:42:50.269261   25774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 18:42:50.375301   25774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 18:42:50.482348   25774 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 18:42:50.482378   25774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 18:42:50.499144   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:50.600957   25774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 18:42:52.035593   25774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4345964s)
	I0912 18:42:52.035674   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:42:52.134695   25774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 18:42:52.252441   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:42:52.363710   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:52.471147   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 18:42:52.484624   25774 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0912 18:42:52.486932   25774 out.go:177] 
	W0912 18:42:52.488525   25774 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0912 18:42:52.488540   25774 out.go:239] * 
	W0912 18:42:52.489285   25774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 18:42:52.491138   25774 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-348977" : exit status 90
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-348977
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-348977 -n multinode-348977
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-348977 logs -n 25: (1.239926268s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977:/home/docker/cp-test_multinode-348977-m02_multinode-348977.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977 sudo cat                                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m02_multinode-348977.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03:/home/docker/cp-test_multinode-348977-m02_multinode-348977-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977-m03 sudo cat                                   | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m02_multinode-348977-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp testdata/cp-test.txt                                                | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977:/home/docker/cp-test_multinode-348977-m03_multinode-348977.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977 sudo cat                                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m03_multinode-348977.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02:/home/docker/cp-test_multinode-348977-m03_multinode-348977-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977-m02 sudo cat                                   | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m03_multinode-348977-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-348977 node stop m03                                                          | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	| node    | multinode-348977 node start                                                             | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-348977                                                                | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC |                     |
	| stop    | -p multinode-348977                                                                     | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:41 UTC |
	| start   | -p multinode-348977                                                                     | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:41 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-348977                                                                | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:42 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:41:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:41:25.667613   25774 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:41:25.667734   25774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:41:25.667744   25774 out.go:309] Setting ErrFile to fd 2...
	I0912 18:41:25.667751   25774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:41:25.667992   25774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:41:25.668537   25774 out.go:303] Setting JSON to false
	I0912 18:41:25.675371   25774 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1436,"bootTime":1694542650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:41:25.675445   25774 start.go:138] virtualization: kvm guest
	I0912 18:41:25.677679   25774 out.go:177] * [multinode-348977] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:41:25.679064   25774 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:41:25.679068   25774 notify.go:220] Checking for updates...
	I0912 18:41:25.680532   25774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:41:25.681821   25774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:25.683123   25774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:41:25.684315   25774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:41:25.685748   25774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:41:25.687862   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:41:25.687948   25774 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:41:25.688376   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:41:25.688437   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:41:25.702321   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0912 18:41:25.702721   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:41:25.703222   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:41:25.703249   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:41:25.703593   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:41:25.703770   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:25.738031   25774 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 18:41:25.739353   25774 start.go:298] selected driver: kvm2
	I0912 18:41:25.739367   25774 start.go:902] validating driver "kvm2" against &{Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:41:25.739535   25774 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:41:25.739863   25774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:41:25.739952   25774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3674/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:41:25.754342   25774 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:41:25.755022   25774 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 18:41:25.755067   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:41:25.755081   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:41:25.755090   25774 start_flags.go:321] config:
	{Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0912 18:41:25.755284   25774 iso.go:125] acquiring lock: {Name:mk43b7bcf1553c61ec6315fe7159639653246bdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:41:25.757119   25774 out.go:177] * Starting control plane node multinode-348977 in cluster multinode-348977
	I0912 18:41:25.758385   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:41:25.758412   25774 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0912 18:41:25.758419   25774 cache.go:57] Caching tarball of preloaded images
	I0912 18:41:25.758521   25774 preload.go:174] Found /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:41:25.758535   25774 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:41:25.758693   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:41:25.758878   25774 start.go:365] acquiring machines lock for multinode-348977: {Name:mkb814e9f5e9709f943ea910e0cc7d91215dc74f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:41:25.758921   25774 start.go:369] acquired machines lock for "multinode-348977" in 23.43µs
	I0912 18:41:25.758937   25774 start.go:96] Skipping create...Using existing machine configuration
	I0912 18:41:25.758946   25774 fix.go:54] fixHost starting: 
	I0912 18:41:25.759194   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:41:25.759230   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:41:25.772820   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0912 18:41:25.773260   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:41:25.773721   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:41:25.773744   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:41:25.774050   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:41:25.774207   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:25.774351   25774 main.go:141] libmachine: (multinode-348977) Calling .GetState
	I0912 18:41:25.776006   25774 fix.go:102] recreateIfNeeded on multinode-348977: state=Stopped err=<nil>
	I0912 18:41:25.776027   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	W0912 18:41:25.776184   25774 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 18:41:25.778169   25774 out.go:177] * Restarting existing kvm2 VM for "multinode-348977" ...
	I0912 18:41:25.779423   25774 main.go:141] libmachine: (multinode-348977) Calling .Start
	I0912 18:41:25.779594   25774 main.go:141] libmachine: (multinode-348977) Ensuring networks are active...
	I0912 18:41:25.780345   25774 main.go:141] libmachine: (multinode-348977) Ensuring network default is active
	I0912 18:41:25.780685   25774 main.go:141] libmachine: (multinode-348977) Ensuring network mk-multinode-348977 is active
	I0912 18:41:25.780989   25774 main.go:141] libmachine: (multinode-348977) Getting domain xml...
	I0912 18:41:25.781706   25774 main.go:141] libmachine: (multinode-348977) Creating domain...
	I0912 18:41:26.979765   25774 main.go:141] libmachine: (multinode-348977) Waiting to get IP...
	I0912 18:41:26.980558   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:26.980870   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:26.980946   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:26.980855   25804 retry.go:31] will retry after 279.689815ms: waiting for machine to come up
	I0912 18:41:27.262432   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.262870   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.262898   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.262825   25804 retry.go:31] will retry after 258.456262ms: waiting for machine to come up
	I0912 18:41:27.523376   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.523770   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.523792   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.523714   25804 retry.go:31] will retry after 470.938004ms: waiting for machine to come up
	I0912 18:41:27.996320   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.996767   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.996795   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.996720   25804 retry.go:31] will retry after 597.246886ms: waiting for machine to come up
	I0912 18:41:28.595108   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:28.595555   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:28.595588   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:28.595492   25804 retry.go:31] will retry after 568.569691ms: waiting for machine to come up
	I0912 18:41:29.165136   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:29.165526   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:29.165568   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:29.165431   25804 retry.go:31] will retry after 758.578505ms: waiting for machine to come up
	I0912 18:41:29.925242   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:29.925603   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:29.925635   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:29.925546   25804 retry.go:31] will retry after 859.704183ms: waiting for machine to come up
	I0912 18:41:30.786642   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:30.786967   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:30.787004   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:30.786922   25804 retry.go:31] will retry after 1.183485789s: waiting for machine to come up
	I0912 18:41:31.972095   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:31.972538   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:31.972559   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:31.972512   25804 retry.go:31] will retry after 1.429607271s: waiting for machine to come up
	I0912 18:41:33.403618   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:33.403985   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:33.404016   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:33.403933   25804 retry.go:31] will retry after 1.93373353s: waiting for machine to come up
	I0912 18:41:35.340062   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:35.340437   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:35.340468   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:35.340385   25804 retry.go:31] will retry after 2.736938727s: waiting for machine to come up
	I0912 18:41:38.080033   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:38.080374   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:38.080419   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:38.080335   25804 retry.go:31] will retry after 3.047877472s: waiting for machine to come up
	I0912 18:41:41.129305   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:41.129731   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:41.129764   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:41.129706   25804 retry.go:31] will retry after 4.362757487s: waiting for machine to come up
	I0912 18:41:45.497217   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.497673   25774 main.go:141] libmachine: (multinode-348977) Found IP for machine: 192.168.39.209
	I0912 18:41:45.497700   25774 main.go:141] libmachine: (multinode-348977) Reserving static IP address...
	I0912 18:41:45.497715   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has current primary IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.498115   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "multinode-348977", mac: "52:54:00:38:2d:65", ip: "192.168.39.209"} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.498148   25774 main.go:141] libmachine: (multinode-348977) DBG | skip adding static IP to network mk-multinode-348977 - found existing host DHCP lease matching {name: "multinode-348977", mac: "52:54:00:38:2d:65", ip: "192.168.39.209"}
	I0912 18:41:45.498157   25774 main.go:141] libmachine: (multinode-348977) Reserved static IP address: 192.168.39.209
	I0912 18:41:45.498169   25774 main.go:141] libmachine: (multinode-348977) Waiting for SSH to be available...
	I0912 18:41:45.498196   25774 main.go:141] libmachine: (multinode-348977) DBG | Getting to WaitForSSH function...
	I0912 18:41:45.500347   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.500695   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.500725   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.500838   25774 main.go:141] libmachine: (multinode-348977) DBG | Using SSH client type: external
	I0912 18:41:45.500863   25774 main.go:141] libmachine: (multinode-348977) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa (-rw-------)
	I0912 18:41:45.500903   25774 main.go:141] libmachine: (multinode-348977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:41:45.500923   25774 main.go:141] libmachine: (multinode-348977) DBG | About to run SSH command:
	I0912 18:41:45.500939   25774 main.go:141] libmachine: (multinode-348977) DBG | exit 0
	I0912 18:41:45.586419   25774 main.go:141] libmachine: (multinode-348977) DBG | SSH cmd err, output: <nil>: 
	I0912 18:41:45.586834   25774 main.go:141] libmachine: (multinode-348977) Calling .GetConfigRaw
	I0912 18:41:45.587556   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:45.589868   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.590371   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.590417   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.590668   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:41:45.590853   25774 machine.go:88] provisioning docker machine ...
	I0912 18:41:45.590870   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:45.591068   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.591255   25774 buildroot.go:166] provisioning hostname "multinode-348977"
	I0912 18:41:45.591275   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.591470   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.593702   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.594074   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.594103   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.594218   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.594383   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.594509   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.594632   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.594781   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:45.595274   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:45.595295   25774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348977 && echo "multinode-348977" | sudo tee /etc/hostname
	I0912 18:41:45.722017   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348977
	
	I0912 18:41:45.722046   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.724726   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.725094   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.725130   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.725251   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.725458   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.725619   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.725761   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.725916   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:45.726274   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:45.726292   25774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348977/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:41:45.842816   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:41:45.842841   25774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3674/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3674/.minikube}
	I0912 18:41:45.842857   25774 buildroot.go:174] setting up certificates
	I0912 18:41:45.842865   25774 provision.go:83] configureAuth start
	I0912 18:41:45.842874   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.843162   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:45.845880   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.846268   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.846304   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.846423   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.848394   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.848724   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.848757   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.848868   25774 provision.go:138] copyHostCerts
	I0912 18:41:45.848897   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:41:45.848925   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem, removing ...
	I0912 18:41:45.848930   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:41:45.848994   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem (1078 bytes)
	I0912 18:41:45.849111   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:41:45.849132   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem, removing ...
	I0912 18:41:45.849136   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:41:45.849173   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem (1123 bytes)
	I0912 18:41:45.849235   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:41:45.849258   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem, removing ...
	I0912 18:41:45.849267   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:41:45.849293   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem (1675 bytes)
	I0912 18:41:45.849363   25774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem org=jenkins.multinode-348977 san=[192.168.39.209 192.168.39.209 localhost 127.0.0.1 minikube multinode-348977]
	I0912 18:41:45.937349   25774 provision.go:172] copyRemoteCerts
	I0912 18:41:45.937398   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:41:45.937443   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.940144   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.940452   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.940478   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.940646   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.940826   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.941012   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.941161   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:46.028317   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 18:41:46.028387   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 18:41:46.051259   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 18:41:46.051345   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 18:41:46.073514   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 18:41:46.073587   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 18:41:46.094769   25774 provision.go:86] duration metric: configureAuth took 251.89397ms
	I0912 18:41:46.094791   25774 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:41:46.095009   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:41:46.095035   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:46.095303   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.097707   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.098061   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.098087   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.098202   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.098375   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.098520   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.098678   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.098851   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.099151   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.099166   25774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 18:41:46.212162   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 18:41:46.212182   25774 buildroot.go:70] root file system type: tmpfs
	I0912 18:41:46.212298   25774 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 18:41:46.212318   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.214891   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.215233   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.215263   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.215455   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.215642   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.215791   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.215920   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.216075   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.216522   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.216627   25774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 18:41:46.339328   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 18:41:46.339371   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.341974   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.342333   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.342373   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.342551   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.342746   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.342899   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.343025   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.343217   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.343656   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.343688   25774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 18:41:47.205623   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 18:41:47.205652   25774 machine.go:91] provisioned docker machine in 1.61478511s
	I0912 18:41:47.205663   25774 start.go:300] post-start starting for "multinode-348977" (driver="kvm2")
	I0912 18:41:47.205676   25774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:41:47.205694   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.205995   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:41:47.206022   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.208743   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.209079   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.209103   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.209248   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.209422   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.209594   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.209743   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.295876   25774 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:41:47.299689   25774 command_runner.go:130] > NAME=Buildroot
	I0912 18:41:47.299703   25774 command_runner.go:130] > VERSION=2021.02.12-1-gaa74cea-dirty
	I0912 18:41:47.299708   25774 command_runner.go:130] > ID=buildroot
	I0912 18:41:47.299713   25774 command_runner.go:130] > VERSION_ID=2021.02.12
	I0912 18:41:47.299717   25774 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0912 18:41:47.299906   25774 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:41:47.299927   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/addons for local assets ...
	I0912 18:41:47.299995   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/files for local assets ...
	I0912 18:41:47.300083   25774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> 108482.pem in /etc/ssl/certs
	I0912 18:41:47.300095   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /etc/ssl/certs/108482.pem
	I0912 18:41:47.300182   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 18:41:47.307891   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:41:47.330135   25774 start.go:303] post-start completed in 124.459565ms
	I0912 18:41:47.330151   25774 fix.go:56] fixHost completed within 21.57120518s
	I0912 18:41:47.330168   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.332212   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.332586   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.332620   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.332750   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.332956   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.333101   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.333254   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.333426   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:47.333724   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:47.333735   25774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0912 18:41:47.443376   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694544107.413556881
	
	I0912 18:41:47.443399   25774 fix.go:206] guest clock: 1694544107.413556881
	I0912 18:41:47.443409   25774 fix.go:219] Guest: 2023-09-12 18:41:47.413556881 +0000 UTC Remote: 2023-09-12 18:41:47.330154345 +0000 UTC m=+21.694086344 (delta=83.402536ms)
	I0912 18:41:47.443449   25774 fix.go:190] guest clock delta is within tolerance: 83.402536ms
	I0912 18:41:47.443457   25774 start.go:83] releasing machines lock for "multinode-348977", held for 21.684524313s
	I0912 18:41:47.443482   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.443730   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:47.446097   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.446567   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.446617   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.446750   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447397   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447575   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447653   25774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:41:47.447692   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.447780   25774 ssh_runner.go:195] Run: cat /version.json
	I0912 18:41:47.447796   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.450306   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450547   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450692   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.450723   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450860   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.451014   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.451041   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.451051   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.451131   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.451226   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.451300   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.451366   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.451417   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.451543   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.556478   25774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 18:41:47.556535   25774 command_runner.go:130] > {"iso_version": "v1.31.0-1694081706-17207", "kicbase_version": "v0.0.40-1693218425-17145", "minikube_version": "v1.31.2", "commit": "1e9174da326b681d7488cd5fad4145a637e5f218"}
	I0912 18:41:47.556664   25774 ssh_runner.go:195] Run: systemctl --version
	I0912 18:41:47.562789   25774 command_runner.go:130] > systemd 247 (247)
	I0912 18:41:47.562819   25774 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0912 18:41:47.563174   25774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 18:41:47.568635   25774 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 18:41:47.568673   25774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:41:47.568744   25774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:41:47.583069   25774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0912 18:41:47.583097   25774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:41:47.583106   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:41:47.583215   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:41:47.600326   25774 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0912 18:41:47.600503   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:41:47.610473   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:41:47.620204   25774 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:41:47.620270   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:41:47.629827   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:41:47.639220   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:41:47.648528   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:41:47.657904   25774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:41:47.668248   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:41:47.678277   25774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:41:47.686527   25774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 18:41:47.686608   25774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:41:47.695327   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:47.797158   25774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:41:47.812584   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:41:47.812657   25774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 18:41:47.826911   25774 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0912 18:41:47.826933   25774 command_runner.go:130] > [Unit]
	I0912 18:41:47.826944   25774 command_runner.go:130] > Description=Docker Application Container Engine
	I0912 18:41:47.826952   25774 command_runner.go:130] > Documentation=https://docs.docker.com
	I0912 18:41:47.826960   25774 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0912 18:41:47.826969   25774 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0912 18:41:47.826978   25774 command_runner.go:130] > StartLimitBurst=3
	I0912 18:41:47.826986   25774 command_runner.go:130] > StartLimitIntervalSec=60
	I0912 18:41:47.826998   25774 command_runner.go:130] > [Service]
	I0912 18:41:47.827006   25774 command_runner.go:130] > Type=notify
	I0912 18:41:47.827015   25774 command_runner.go:130] > Restart=on-failure
	I0912 18:41:47.827032   25774 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0912 18:41:47.827055   25774 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0912 18:41:47.827069   25774 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0912 18:41:47.827082   25774 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0912 18:41:47.827095   25774 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0912 18:41:47.827109   25774 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0912 18:41:47.827127   25774 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0912 18:41:47.827144   25774 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0912 18:41:47.827160   25774 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0912 18:41:47.827169   25774 command_runner.go:130] > ExecStart=
	I0912 18:41:47.827195   25774 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0912 18:41:47.827210   25774 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0912 18:41:47.827222   25774 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0912 18:41:47.827234   25774 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0912 18:41:47.827246   25774 command_runner.go:130] > LimitNOFILE=infinity
	I0912 18:41:47.827266   25774 command_runner.go:130] > LimitNPROC=infinity
	I0912 18:41:47.827278   25774 command_runner.go:130] > LimitCORE=infinity
	I0912 18:41:47.827289   25774 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0912 18:41:47.827299   25774 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0912 18:41:47.827311   25774 command_runner.go:130] > TasksMax=infinity
	I0912 18:41:47.827322   25774 command_runner.go:130] > TimeoutStartSec=0
	I0912 18:41:47.827336   25774 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0912 18:41:47.827346   25774 command_runner.go:130] > Delegate=yes
	I0912 18:41:47.827356   25774 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0912 18:41:47.827363   25774 command_runner.go:130] > KillMode=process
	I0912 18:41:47.827369   25774 command_runner.go:130] > [Install]
	I0912 18:41:47.827385   25774 command_runner.go:130] > WantedBy=multi-user.target
	I0912 18:41:47.827455   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:41:47.849101   25774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 18:41:47.865230   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:41:47.877782   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:41:47.890445   25774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:41:47.918932   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:41:47.930773   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:41:47.947116   25774 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0912 18:41:47.947200   25774 ssh_runner.go:195] Run: which cri-dockerd
	I0912 18:41:47.950521   25774 command_runner.go:130] > /usr/bin/cri-dockerd
	I0912 18:41:47.950648   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 18:41:47.958320   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 18:41:47.973919   25774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 18:41:48.073799   25774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 18:41:48.184968   25774 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 18:41:48.185002   25774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 18:41:48.201823   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:48.299993   25774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 18:41:49.744586   25774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.444560977s)
	I0912 18:41:49.744655   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:41:49.846098   25774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 18:41:49.958418   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:41:50.061865   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:50.173855   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 18:41:50.189825   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:50.290635   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0912 18:41:50.371946   25774 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 18:41:50.372017   25774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 18:41:50.377756   25774 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0912 18:41:50.377774   25774 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 18:41:50.377781   25774 command_runner.go:130] > Device: 16h/22d	Inode: 849         Links: 1
	I0912 18:41:50.377800   25774 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0912 18:41:50.377809   25774 command_runner.go:130] > Access: 2023-09-12 18:41:50.278241513 +0000
	I0912 18:41:50.377817   25774 command_runner.go:130] > Modify: 2023-09-12 18:41:50.278241513 +0000
	I0912 18:41:50.377825   25774 command_runner.go:130] > Change: 2023-09-12 18:41:50.281246019 +0000
	I0912 18:41:50.377831   25774 command_runner.go:130] >  Birth: -
	I0912 18:41:50.377939   25774 start.go:537] Will wait 60s for crictl version
	I0912 18:41:50.377991   25774 ssh_runner.go:195] Run: which crictl
	I0912 18:41:50.381510   25774 command_runner.go:130] > /usr/bin/crictl
	I0912 18:41:50.381786   25774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 18:41:50.424429   25774 command_runner.go:130] > Version:  0.1.0
	I0912 18:41:50.424452   25774 command_runner.go:130] > RuntimeName:  docker
	I0912 18:41:50.424460   25774 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0912 18:41:50.424466   25774 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0912 18:41:50.424724   25774 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0912 18:41:50.424789   25774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 18:41:50.450690   25774 command_runner.go:130] > 24.0.6
	I0912 18:41:50.450956   25774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 18:41:50.476349   25774 command_runner.go:130] > 24.0.6
	I0912 18:41:50.479420   25774 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0912 18:41:50.479460   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:50.482063   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:50.482385   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:50.482420   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:50.482563   25774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 18:41:50.486347   25774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:41:50.497696   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:41:50.497741   25774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 18:41:50.517010   25774 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0912 18:41:50.517029   25774 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 18:41:50.517037   25774 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0912 18:41:50.517049   25774 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0912 18:41:50.517055   25774 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0912 18:41:50.517062   25774 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0912 18:41:50.517072   25774 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0912 18:41:50.517083   25774 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0912 18:41:50.517094   25774 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:41:50.517105   25774 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0912 18:41:50.517185   25774 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0912 18:41:50.517206   25774 docker.go:566] Images already preloaded, skipping extraction
	I0912 18:41:50.517258   25774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 18:41:50.536618   25774 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0912 18:41:50.536638   25774 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0912 18:41:50.536646   25774 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0912 18:41:50.536655   25774 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 18:41:50.536663   25774 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0912 18:41:50.536670   25774 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0912 18:41:50.536682   25774 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0912 18:41:50.536688   25774 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0912 18:41:50.536697   25774 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:41:50.536704   25774 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0912 18:41:50.536730   25774 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0912 18:41:50.536746   25774 cache_images.go:84] Images are preloaded, skipping loading
	I0912 18:41:50.536845   25774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 18:41:50.562638   25774 command_runner.go:130] > cgroupfs
	I0912 18:41:50.562909   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:41:50.562928   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:41:50.562949   25774 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 18:41:50.562976   25774 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-348977 NodeName:multinode-348977 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 18:41:50.563124   25774 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-348977"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 18:41:50.563209   25774 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-348977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 18:41:50.563267   25774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 18:41:50.572176   25774 command_runner.go:130] > kubeadm
	I0912 18:41:50.572195   25774 command_runner.go:130] > kubectl
	I0912 18:41:50.572202   25774 command_runner.go:130] > kubelet
	I0912 18:41:50.572227   25774 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 18:41:50.572284   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 18:41:50.580040   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0912 18:41:50.596061   25774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 18:41:50.611346   25774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0912 18:41:50.627820   25774 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0912 18:41:50.631401   25774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:41:50.643788   25774 certs.go:56] Setting up /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977 for IP: 192.168.39.209
	I0912 18:41:50.643818   25774 certs.go:190] acquiring lock for shared ca certs: {Name:mk2421757d3f1bd81d42ecb091844bc5771a96da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:41:50.643980   25774 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key
	I0912 18:41:50.644020   25774 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key
	I0912 18:41:50.644084   25774 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key
	I0912 18:41:50.644164   25774 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key.c475731a
	I0912 18:41:50.644203   25774 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key
	I0912 18:41:50.644214   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 18:41:50.644226   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 18:41:50.644237   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 18:41:50.644251   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 18:41:50.644263   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 18:41:50.644276   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 18:41:50.644288   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 18:41:50.644299   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 18:41:50.644353   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem (1338 bytes)
	W0912 18:41:50.644381   25774 certs.go:433] ignoring /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848_empty.pem, impossibly tiny 0 bytes
	I0912 18:41:50.644391   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 18:41:50.644411   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem (1078 bytes)
	I0912 18:41:50.644433   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem (1123 bytes)
	I0912 18:41:50.644454   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem (1675 bytes)
	I0912 18:41:50.644488   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:41:50.644515   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.644528   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem -> /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.644540   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.645043   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 18:41:50.669387   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 18:41:50.693124   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 18:41:50.717344   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 18:41:50.741383   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 18:41:50.765260   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 18:41:50.788458   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 18:41:50.812054   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 18:41:50.834458   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 18:41:50.856384   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem --> /usr/share/ca-certificates/10848.pem (1338 bytes)
	I0912 18:41:50.879015   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /usr/share/ca-certificates/108482.pem (1708 bytes)
	I0912 18:41:50.901340   25774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 18:41:50.916988   25774 ssh_runner.go:195] Run: openssl version
	I0912 18:41:50.922215   25774 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0912 18:41:50.922272   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 18:41:50.931174   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935308   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935555   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935591   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.940768   25774 command_runner.go:130] > b5213941
	I0912 18:41:50.940828   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 18:41:50.949933   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10848.pem && ln -fs /usr/share/ca-certificates/10848.pem /etc/ssl/certs/10848.pem"
	I0912 18:41:50.959171   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963298   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 18:25 /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963387   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 18:25 /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963424   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.968525   25774 command_runner.go:130] > 51391683
	I0912 18:41:50.968577   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10848.pem /etc/ssl/certs/51391683.0"
	I0912 18:41:50.977420   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108482.pem && ln -fs /usr/share/ca-certificates/108482.pem /etc/ssl/certs/108482.pem"
	I0912 18:41:50.986499   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990623   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 18:25 /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990805   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 18:25 /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990843   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.996022   25774 command_runner.go:130] > 3ec20f2e
	I0912 18:41:50.996068   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/108482.pem /etc/ssl/certs/3ec20f2e.0"
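	The hash-then-symlink sequence above is how OpenSSL's trust store works: libraries look up CAs in `/etc/ssl/certs` by subject-name hash, so each PEM needs a `<hash>.0` symlink (e.g. `b5213941.0` for minikubeCA in the log). A sketch in a temp dir with a throwaway self-signed cert (the subject name is illustrative):

```shell
# Reproduce the openssl x509 -hash / ln -fs steps from the log in a temp dir.
CERT_DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=minikubeCA' \
  -keyout "$CERT_DIR/ca.key" -out "$CERT_DIR/ca.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$CERT_DIR/ca.pem")  # the hash minikube logs
ln -fs "$CERT_DIR/ca.pem" "$CERT_DIR/$HASH.0"             # the ln -fs step above
openssl x509 -noout -subject -in "$CERT_DIR/$HASH.0"
```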
	I0912 18:41:51.005086   25774 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 18:41:51.009100   25774 command_runner.go:130] > ca.crt
	I0912 18:41:51.009117   25774 command_runner.go:130] > ca.key
	I0912 18:41:51.009124   25774 command_runner.go:130] > healthcheck-client.crt
	I0912 18:41:51.009131   25774 command_runner.go:130] > healthcheck-client.key
	I0912 18:41:51.009139   25774 command_runner.go:130] > peer.crt
	I0912 18:41:51.009144   25774 command_runner.go:130] > peer.key
	I0912 18:41:51.009151   25774 command_runner.go:130] > server.crt
	I0912 18:41:51.009160   25774 command_runner.go:130] > server.key
	I0912 18:41:51.009343   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 18:41:51.014870   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.014914   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 18:41:51.020073   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.020297   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 18:41:51.025479   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.025531   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 18:41:51.030928   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.030980   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 18:41:51.036454   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.036501   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 18:41:51.041830   25774 command_runner.go:130] > Certificate will not expire
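	Each `-checkend 86400` probe above asks: will this certificate still be valid 24 hours from now? Exit status 0 plus "Certificate will not expire" means yes. A sketch with a throwaway two-day cert, probing both inside and outside its validity window:

```shell
# Demonstrate openssl x509 -checkend with a short-lived self-signed cert.
CK_DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj '/CN=demo' \
  -keyout "$CK_DIR/demo.key" -out "$CK_DIR/demo.crt" 2>/dev/null
openssl x509 -noout -in "$CK_DIR/demo.crt" -checkend 86400    # 1 day: still valid
openssl x509 -noout -in "$CK_DIR/demo.crt" -checkend 259200 \
  || echo 'would expire within 3 days'                        # 3 days: exit 1
```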
	I0912 18:41:51.041883   25774 kubeadm.go:404] StartCluster: {Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:41:51.041992   25774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 18:41:51.060635   25774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 18:41:51.069462   25774 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0912 18:41:51.069486   25774 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0912 18:41:51.069493   25774 command_runner.go:130] > /var/lib/minikube/etcd:
	I0912 18:41:51.069497   25774 command_runner.go:130] > member
	I0912 18:41:51.069726   25774 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0912 18:41:51.069759   25774 kubeadm.go:636] restartCluster start
	I0912 18:41:51.069810   25774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 18:41:51.077935   25774 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.078333   25774 kubeconfig.go:135] verify returned: extract IP: "multinode-348977" does not appear in /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:51.078445   25774 kubeconfig.go:146] "multinode-348977" context is missing from /home/jenkins/minikube-integration/17233-3674/kubeconfig - will repair!
	I0912 18:41:51.078733   25774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3674/kubeconfig: {Name:mked094375583bdbe55c31d17add6f22f93c8430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:41:51.079119   25774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:51.079379   25774 kapi.go:59] client config for multinode-348977: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key", CAFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 18:41:51.079954   25774 cert_rotation.go:137] Starting client certificate rotation controller
	I0912 18:41:51.080075   25774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 18:41:51.088002   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.088038   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.098117   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.098131   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.098157   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.109116   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.609873   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.610041   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.622441   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:52.110086   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:52.110176   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:52.121246   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:52.609915   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:52.609995   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:52.621769   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:53.109288   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:53.109378   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:53.120512   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:53.610143   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:53.610216   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:53.621426   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:54.110052   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:54.110138   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:54.121104   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:54.609667   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:54.609758   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:54.621600   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:55.110219   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:55.110305   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:55.121464   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:55.610086   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:55.610163   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:55.623327   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:56.109204   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:56.109279   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:56.120664   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:56.609222   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:56.609302   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:56.620802   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:57.109386   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:57.109488   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:57.120886   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:57.609416   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:57.609490   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:57.621348   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:58.109961   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:58.110033   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:58.121759   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:58.609284   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:58.609358   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:58.620494   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:59.110173   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:59.110270   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:59.121513   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:59.610149   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:59.610234   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:59.621495   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:00.110117   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:42:00.110197   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:42:00.121328   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:00.609937   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:42:00.610014   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:42:00.621204   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:01.088910   25774 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
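	The ten seconds of "Checking apiserver status" above is a plain poll-until-deadline loop. A generic shell sketch of that pattern: `probe` here is a stand-in for the `sudo pgrep -xnf kube-apiserver.*minikube.*` call and always fails, so the loop runs into its (shortened) deadline exactly as the log does:

```shell
# Retry a probe until it succeeds or a deadline passes, then report status.
probe() { false; }           # stand-in for the pgrep check in the log
deadline=$((SECONDS + 1))    # the log polls for ~10 s; shortened here
until probe; do
  if [ "$SECONDS" -ge "$deadline" ]; then break; fi
  sleep 0.2
done
probe && apiserver=up || apiserver=down
echo "apiserver: $apiserver"
```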
	I0912 18:42:01.088951   25774 kubeadm.go:1128] stopping kube-system containers ...
	I0912 18:42:01.089019   25774 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 18:42:01.110509   25774 command_runner.go:130] > 43aaf5c3bf6e
	I0912 18:42:01.110528   25774 command_runner.go:130] > 012e61091353
	I0912 18:42:01.110532   25774 command_runner.go:130] > 96a48d1e6808
	I0912 18:42:01.110536   25774 command_runner.go:130] > d9fcb5b50176
	I0912 18:42:01.110541   25774 command_runner.go:130] > 5486463296b7
	I0912 18:42:01.110545   25774 command_runner.go:130] > 7791e737cea3
	I0912 18:42:01.110548   25774 command_runner.go:130] > 1e31cfd643be
	I0912 18:42:01.110552   25774 command_runner.go:130] > 061d1cef513d
	I0912 18:42:01.110557   25774 command_runner.go:130] > 5253cfd31af0
	I0912 18:42:01.110564   25774 command_runner.go:130] > ff41c9b085ad
	I0912 18:42:01.110569   25774 command_runner.go:130] > c0587efa38db
	I0912 18:42:01.110575   25774 command_runner.go:130] > 3627cce96a10
	I0912 18:42:01.110581   25774 command_runner.go:130] > 14cac5d320ea
	I0912 18:42:01.110587   25774 command_runner.go:130] > a0de152dc98d
	I0912 18:42:01.110602   25774 command_runner.go:130] > 7fabc68ca233
	I0912 18:42:01.110615   25774 command_runner.go:130] > e113d197f01f
	I0912 18:42:01.110643   25774 docker.go:462] Stopping containers: [43aaf5c3bf6e 012e61091353 96a48d1e6808 d9fcb5b50176 5486463296b7 7791e737cea3 1e31cfd643be 061d1cef513d 5253cfd31af0 ff41c9b085ad c0587efa38db 3627cce96a10 14cac5d320ea a0de152dc98d 7fabc68ca233 e113d197f01f]
	I0912 18:42:01.110731   25774 ssh_runner.go:195] Run: docker stop 43aaf5c3bf6e 012e61091353 96a48d1e6808 d9fcb5b50176 5486463296b7 7791e737cea3 1e31cfd643be 061d1cef513d 5253cfd31af0 ff41c9b085ad c0587efa38db 3627cce96a10 14cac5d320ea a0de152dc98d 7fabc68ca233 e113d197f01f
	I0912 18:42:01.135827   25774 command_runner.go:130] > 43aaf5c3bf6e
	I0912 18:42:01.135850   25774 command_runner.go:130] > 012e61091353
	I0912 18:42:01.135857   25774 command_runner.go:130] > 96a48d1e6808
	I0912 18:42:01.135864   25774 command_runner.go:130] > d9fcb5b50176
	I0912 18:42:01.135871   25774 command_runner.go:130] > 5486463296b7
	I0912 18:42:01.135876   25774 command_runner.go:130] > 7791e737cea3
	I0912 18:42:01.135880   25774 command_runner.go:130] > 1e31cfd643be
	I0912 18:42:01.135883   25774 command_runner.go:130] > 061d1cef513d
	I0912 18:42:01.135898   25774 command_runner.go:130] > 5253cfd31af0
	I0912 18:42:01.135905   25774 command_runner.go:130] > ff41c9b085ad
	I0912 18:42:01.135914   25774 command_runner.go:130] > c0587efa38db
	I0912 18:42:01.135928   25774 command_runner.go:130] > 3627cce96a10
	I0912 18:42:01.135941   25774 command_runner.go:130] > 14cac5d320ea
	I0912 18:42:01.135948   25774 command_runner.go:130] > a0de152dc98d
	I0912 18:42:01.135954   25774 command_runner.go:130] > 7fabc68ca233
	I0912 18:42:01.135961   25774 command_runner.go:130] > e113d197f01f
	I0912 18:42:01.137164   25774 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 18:42:01.153212   25774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 18:42:01.162065   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0912 18:42:01.162089   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0912 18:42:01.162097   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0912 18:42:01.162104   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:42:01.162134   25774 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:42:01.162182   25774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 18:42:01.171349   25774 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0912 18:42:01.171377   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:01.289145   25774 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 18:42:01.289913   25774 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0912 18:42:01.290490   25774 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0912 18:42:01.291144   25774 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 18:42:01.292064   25774 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0912 18:42:01.292637   25774 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0912 18:42:01.293503   25774 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0912 18:42:01.294172   25774 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0912 18:42:01.294833   25774 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0912 18:42:01.295368   25774 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 18:42:01.296031   25774 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 18:42:01.297818   25774 command_runner.go:130] > [certs] Using the existing "sa" key
	I0912 18:42:01.298323   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.545462   25774 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 18:42:02.545494   25774 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 18:42:02.545504   25774 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 18:42:02.545512   25774 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 18:42:02.545520   25774 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 18:42:02.545606   25774 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2472468s)
	I0912 18:42:02.545640   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.733506   25774 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 18:42:02.733549   25774 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 18:42:02.733559   25774 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0912 18:42:02.733582   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.804619   25774 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 18:42:02.804641   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 18:42:02.809567   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 18:42:02.811026   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 18:42:02.816366   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.884578   25774 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 18:42:02.888100   25774 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:42:02.888191   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:02.904156   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:03.418205   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:03.918314   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:04.418376   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:04.917697   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:05.011521   25774 command_runner.go:130] > 1613
	I0912 18:42:05.012117   25774 api_server.go:72] duration metric: took 2.124014474s to wait for apiserver process to appear ...
	I0912 18:42:05.012146   25774 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:42:05.012167   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:05.012754   25774 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I0912 18:42:05.012783   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:05.013807   25774 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I0912 18:42:05.514175   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.050114   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 18:42:08.050147   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 18:42:08.050161   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.082430   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 18:42:08.082459   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 18:42:08.513959   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.519156   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 18:42:08.519185   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 18:42:09.014802   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:09.019756   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 18:42:09.019792   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 18:42:09.514302   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:09.520638   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:09.520736   25774 round_trippers.go:463] GET https://192.168.39.209:8443/version
	I0912 18:42:09.520751   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:09.520764   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:09.520778   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:09.528622   25774 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 18:42:09.528643   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:09.528652   25774 round_trippers.go:580]     Audit-Id: d7171996-093f-43cf-b1f6-28902f5d151b
	I0912 18:42:09.528659   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:09.528666   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:09.528674   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:09.528683   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:09.528692   25774 round_trippers.go:580]     Content-Length: 263
	I0912 18:42:09.528702   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:09 GMT
	I0912 18:42:09.528733   25774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 18:42:09.528825   25774 api_server.go:141] control plane version: v1.28.1
	I0912 18:42:09.528843   25774 api_server.go:131] duration metric: took 4.516689082s to wait for apiserver health ...
	I0912 18:42:09.528854   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:42:09.528863   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:42:09.530468   25774 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 18:42:09.531871   25774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 18:42:09.537270   25774 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0912 18:42:09.537291   25774 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0912 18:42:09.537299   25774 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0912 18:42:09.537309   25774 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 18:42:09.537318   25774 command_runner.go:130] > Access: 2023-09-12 18:41:39.002927530 +0000
	I0912 18:42:09.537330   25774 command_runner.go:130] > Modify: 2023-09-07 15:52:17.000000000 +0000
	I0912 18:42:09.537339   25774 command_runner.go:130] > Change: 2023-09-12 18:41:36.512921513 +0000
	I0912 18:42:09.537346   25774 command_runner.go:130] >  Birth: -
	I0912 18:42:09.537468   25774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 18:42:09.537490   25774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 18:42:09.570387   25774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 18:42:11.089548   25774 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0912 18:42:11.089572   25774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0912 18:42:11.089578   25774 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0912 18:42:11.089583   25774 command_runner.go:130] > daemonset.apps/kindnet configured
	I0912 18:42:11.089601   25774 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.519192244s)
	I0912 18:42:11.089624   25774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:42:11.089698   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:11.089707   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.089714   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.089720   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.094477   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.094497   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.094506   25774 round_trippers.go:580]     Audit-Id: 28e1b4f7-27a4-4728-9259-012beb5aa7e7
	I0912 18:42:11.094513   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.094519   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.094529   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.094536   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.094545   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.097562   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84576 chars]
	I0912 18:42:11.101281   25774 system_pods.go:59] 12 kube-system pods found
	I0912 18:42:11.101307   25774 system_pods.go:61] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 18:42:11.101315   25774 system_pods.go:61] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 18:42:11.101319   25774 system_pods.go:61] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:11.101324   25774 system_pods.go:61] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:11.101329   25774 system_pods.go:61] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0912 18:42:11.101335   25774 system_pods.go:61] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 18:42:11.101344   25774 system_pods.go:61] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 18:42:11.101349   25774 system_pods.go:61] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:11.101354   25774 system_pods.go:61] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:11.101359   25774 system_pods.go:61] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 18:42:11.101365   25774 system_pods.go:61] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 18:42:11.101374   25774 system_pods.go:61] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 18:42:11.101381   25774 system_pods.go:74] duration metric: took 11.751351ms to wait for pod list to return data ...
	I0912 18:42:11.101392   25774 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:42:11.101439   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:11.101446   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.101454   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.101459   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.105805   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.105819   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.105827   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.105841   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.105847   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.105852   25774 round_trippers.go:580]     Audit-Id: ee581856-dce8-447b-8358-f37a47339ad8
	I0912 18:42:11.105857   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.105862   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.106297   25774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13670 chars]
	I0912 18:42:11.106975   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.106994   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107003   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.107007   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107011   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.107014   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107018   25774 node_conditions.go:105] duration metric: took 5.622718ms to run NodePressure ...
	I0912 18:42:11.107031   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:11.464902   25774 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0912 18:42:11.464923   25774 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0912 18:42:11.464949   25774 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0912 18:42:11.465044   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0912 18:42:11.465058   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.465069   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.465075   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.467829   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.467850   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.467860   25774 round_trippers.go:580]     Audit-Id: 78f00d5a-7eb8-4a9e-b90d-d323283aff0d
	I0912 18:42:11.467868   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.467874   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.467879   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.467884   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.467890   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.468461   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"776"},"items":[{"metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"762","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I0912 18:42:11.469818   25774 kubeadm.go:787] kubelet initialised
	I0912 18:42:11.469838   25774 kubeadm.go:788] duration metric: took 4.877378ms waiting for restarted kubelet to initialise ...
	I0912 18:42:11.469845   25774 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:11.469907   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:11.469918   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.469928   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.469935   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.473358   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:11.473371   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.473376   25774 round_trippers.go:580]     Audit-Id: 1e9316d5-4fa8-4920-a611-94538e5de9d2
	I0912 18:42:11.473382   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.473390   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.473395   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.473400   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.473405   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.475084   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"776"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84576 chars]
	I0912 18:42:11.477518   25774 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.477580   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:11.477588   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.477595   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.477600   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.480258   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.480275   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.480281   25774 round_trippers.go:580]     Audit-Id: 26d11606-643c-4680-9fdd-7c6079a0b9d0
	I0912 18:42:11.480287   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.480292   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.480297   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.480302   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.480307   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.480652   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:11.481023   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.481034   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.481040   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.481046   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.483036   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.483082   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.483091   25774 round_trippers.go:580]     Audit-Id: e8a416f7-552b-4b46-8961-5851964b96f3
	I0912 18:42:11.483096   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.483102   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.483111   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.483120   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.483142   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.483383   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.483638   25774 pod_ready.go:97] node "multinode-348977" hosting pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.483651   25774 pod_ready.go:81] duration metric: took 6.116525ms waiting for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.483658   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.483665   25774 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.483707   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348977
	I0912 18:42:11.483714   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.483721   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.483726   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.485560   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.485573   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.485578   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.485583   25774 round_trippers.go:580]     Audit-Id: 4142a2e4-b055-4e71-a505-4b1655dfe4ed
	I0912 18:42:11.485588   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.485593   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.485598   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.485604   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.485740   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"762","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I0912 18:42:11.486046   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.486056   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.486063   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.486069   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.487681   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.487700   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.487708   25774 round_trippers.go:580]     Audit-Id: 183de2f6-83fd-4668-8968-150418f82b3c
	I0912 18:42:11.487716   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.487725   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.487731   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.487739   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.487744   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.488007   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.488347   25774 pod_ready.go:97] node "multinode-348977" hosting pod "etcd-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.488365   25774 pod_ready.go:81] duration metric: took 4.694293ms waiting for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.488375   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "etcd-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.488396   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.488451   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348977
	I0912 18:42:11.488461   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.488472   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.488485   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.490841   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.490854   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.490860   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.490865   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.490870   25774 round_trippers.go:580]     Audit-Id: d2933280-f99b-440c-a008-24b2a483ce04
	I0912 18:42:11.490875   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.490880   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.490885   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.491841   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348977","namespace":"kube-system","uid":"f540dfd0-b1d9-4e3f-b9ab-f02db770e920","resourceVersion":"763","creationTimestamp":"2023-09-12T18:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.209:8443","kubernetes.io/config.hash":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.mirror":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.seen":"2023-09-12T18:38:05.461231178Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I0912 18:42:11.492324   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.492342   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.492359   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.492373   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.494392   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.494403   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.494409   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.494414   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.494419   25774 round_trippers.go:580]     Audit-Id: 2b3e9b5c-ec6e-47ef-8eb4-42325dd1cadd
	I0912 18:42:11.494425   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.494430   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.494435   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.494702   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.495039   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-apiserver-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.495057   25774 pod_ready.go:81] duration metric: took 6.649671ms waiting for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.495064   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-apiserver-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.495070   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.495114   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348977
	I0912 18:42:11.495121   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.495127   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.495134   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.496898   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.496911   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.496927   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.496938   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.496951   25774 round_trippers.go:580]     Audit-Id: 0f5e0750-0fcd-4f41-abcd-d523c6aae03a
	I0912 18:42:11.496960   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.496973   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.496986   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.497810   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348977","namespace":"kube-system","uid":"930d0357-f21e-4a4e-8c3b-2cff3263568f","resourceVersion":"764","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.mirror":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.seen":"2023-09-12T18:37:56.784236763Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I0912 18:42:11.498183   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.498196   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.498203   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.498209   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.500292   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.500307   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.500316   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.500326   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.500336   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.500351   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.500361   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.500373   25774 round_trippers.go:580]     Audit-Id: d118dc25-8eab-43dd-a453-09eea91ee36a
	I0912 18:42:11.500556   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.500842   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-controller-manager-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.500855   25774 pod_ready.go:81] duration metric: took 5.775247ms waiting for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.500863   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-controller-manager-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.500880   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.690301   25774 request.go:629] Waited for 189.366968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:11.690363   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:11.690369   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.690379   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.690387   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.694882   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.694902   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.694909   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.694914   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.694919   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.694925   25774 round_trippers.go:580]     Audit-Id: 96fbd2df-71d0-42f2-9668-9a4751b3b372
	I0912 18:42:11.694930   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.694943   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.695466   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wfpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"774a14f5-3c1d-4a3b-a265-290361f0fbe3","resourceVersion":"515","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0912 18:42:11.890223   25774 request.go:629] Waited for 194.343656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:11.890300   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:11.890309   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.890325   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.890341   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.893961   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:11.893979   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.893985   25774 round_trippers.go:580]     Audit-Id: d58ddf1c-05d0-4d76-9b86-e75d4563c79f
	I0912 18:42:11.893991   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.893996   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.894003   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.894011   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.894022   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.894278   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m02","uid":"0a11e94b-756b-4c81-9734-627ddcc38b98","resourceVersion":"581","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3266 chars]
	I0912 18:42:11.894534   25774 pod_ready.go:92] pod "kube-proxy-2wfpr" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:11.894547   25774 pod_ready.go:81] duration metric: took 393.659737ms waiting for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.894556   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.089913   25774 request.go:629] Waited for 195.278265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:12.089988   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:12.089997   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.090007   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.090021   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.092533   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.092550   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.092557   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.092563   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.092568   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.092573   25774 round_trippers.go:580]     Audit-Id: ab50bf62-fff6-4392-8010-8f7ac978ac19
	I0912 18:42:12.092578   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.092591   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.092750   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvnqz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d610f9be-c231-4aae-9870-e627ce41bf23","resourceVersion":"736","creationTimestamp":"2023-09-12T18:39:59Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0912 18:42:12.290497   25774 request.go:629] Waited for 197.357363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:12.290566   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:12.290571   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.290578   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.290608   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.293062   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.293078   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.293084   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.293089   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.293094   25774 round_trippers.go:580]     Audit-Id: e4b3f466-3c77-4c24-b13b-af89b75e0355
	I0912 18:42:12.293099   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.293104   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.293108   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.293215   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m03","uid":"03d033eb-43a1-4b37-a2a0-6de70662f3e7","resourceVersion":"753","creationTimestamp":"2023-09-12T18:40:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:40:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0912 18:42:12.293445   25774 pod_ready.go:92] pod "kube-proxy-fvnqz" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:12.293456   25774 pod_ready.go:81] duration metric: took 398.886453ms waiting for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.293465   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.489818   25774 request.go:629] Waited for 196.284343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:12.489872   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:12.489876   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.489884   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.489890   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.492417   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.492438   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.492447   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.492457   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.492465   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.492474   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.492483   25774 round_trippers.go:580]     Audit-Id: 8e58b70d-eb64-4ad7-8f40-d0b9d1828c0c
	I0912 18:42:12.492488   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.493218   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gp457","generateName":"kube-proxy-","namespace":"kube-system","uid":"39d70e08-cba7-4545-a6eb-a2e9152458dc","resourceVersion":"769","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I0912 18:42:12.689962   25774 request.go:629] Waited for 196.319367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:12.690069   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:12.690082   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.690089   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.690095   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.692930   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.692946   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.692952   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.692957   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.692963   25774 round_trippers.go:580]     Audit-Id: 65cc805b-a9c2-4b93-b29d-314f12cbeece
	I0912 18:42:12.692970   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.692978   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.692991   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.693385   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:12.693887   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-proxy-gp457" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:12.693914   25774 pod_ready.go:81] duration metric: took 400.443481ms waiting for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:12.693926   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-proxy-gp457" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:12.693942   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.890377   25774 request.go:629] Waited for 196.363771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:12.890460   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:12.890467   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.890477   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.890486   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.893424   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.893446   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.893456   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.893471   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.893483   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.893490   25774 round_trippers.go:580]     Audit-Id: d470dad8-0297-4d0d-a80e-ac6f86679c42
	I0912 18:42:12.893497   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.893505   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.893673   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348977","namespace":"kube-system","uid":"69ef187d-8c5d-4b26-861e-4a2178c309e7","resourceVersion":"765","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.mirror":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.seen":"2023-09-12T18:37:56.784237754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I0912 18:42:13.090457   25774 request.go:629] Waited for 196.397433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.090511   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.090515   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.090523   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.090532   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.093408   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.093432   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.093443   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.093452   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.093461   25774 round_trippers.go:580]     Audit-Id: 867db361-341c-4413-be9c-31e1e7cc54ab
	I0912 18:42:13.093470   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.093482   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.093490   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.093840   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.094121   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-scheduler-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:13.094136   25774 pod_ready.go:81] duration metric: took 400.181932ms waiting for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:13.094144   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-scheduler-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:13.094153   25774 pod_ready.go:38] duration metric: took 1.62429968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:13.094171   25774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 18:42:13.109771   25774 command_runner.go:130] > -16
	I0912 18:42:13.109820   25774 ops.go:34] apiserver oom_adj: -16
	I0912 18:42:13.109828   25774 kubeadm.go:640] restartCluster took 22.040060524s
	I0912 18:42:13.109838   25774 kubeadm.go:406] StartCluster complete in 22.067960392s
	I0912 18:42:13.109857   25774 settings.go:142] acquiring lock: {Name:mk701ee4b509c72ea6c30dd8b1ed35b0318b6f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:42:13.109946   25774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:42:13.110630   25774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3674/kubeconfig: {Name:mked094375583bdbe55c31d17add6f22f93c8430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:42:13.110874   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 18:42:13.110895   25774 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0912 18:42:13.113800   25774 out.go:177] * Enabled addons: 
	I0912 18:42:13.111083   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:13.111144   25774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:42:13.115047   25774 addons.go:502] enable addons completed in 4.162587ms: enabled=[]
	I0912 18:42:13.115270   25774 kapi.go:59] client config for multinode-348977: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key", CAFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 18:42:13.115629   25774 round_trippers.go:463] GET https://192.168.39.209:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 18:42:13.115643   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.115653   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.115662   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.118334   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.118358   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.118367   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.118375   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.118386   25774 round_trippers.go:580]     Content-Length: 291
	I0912 18:42:13.118396   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.118404   25774 round_trippers.go:580]     Audit-Id: 977f14e1-5a64-4189-aeab-98356e20ae68
	I0912 18:42:13.118415   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.118423   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.118453   25774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"689d8907-7a8c-41b5-a29a-3d911c1eccad","resourceVersion":"775","creationTimestamp":"2023-09-12T18:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0912 18:42:13.118646   25774 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-348977" context rescaled to 1 replicas
	I0912 18:42:13.118678   25774 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 18:42:13.120242   25774 out.go:177] * Verifying Kubernetes components...
	I0912 18:42:13.121523   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:13.243631   25774 command_runner.go:130] > apiVersion: v1
	I0912 18:42:13.243656   25774 command_runner.go:130] > data:
	I0912 18:42:13.243663   25774 command_runner.go:130] >   Corefile: |
	I0912 18:42:13.243669   25774 command_runner.go:130] >     .:53 {
	I0912 18:42:13.243674   25774 command_runner.go:130] >         log
	I0912 18:42:13.243681   25774 command_runner.go:130] >         errors
	I0912 18:42:13.243688   25774 command_runner.go:130] >         health {
	I0912 18:42:13.243695   25774 command_runner.go:130] >            lameduck 5s
	I0912 18:42:13.243700   25774 command_runner.go:130] >         }
	I0912 18:42:13.243708   25774 command_runner.go:130] >         ready
	I0912 18:42:13.243717   25774 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0912 18:42:13.243723   25774 command_runner.go:130] >            pods insecure
	I0912 18:42:13.243736   25774 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0912 18:42:13.243744   25774 command_runner.go:130] >            ttl 30
	I0912 18:42:13.243751   25774 command_runner.go:130] >         }
	I0912 18:42:13.243768   25774 command_runner.go:130] >         prometheus :9153
	I0912 18:42:13.243775   25774 command_runner.go:130] >         hosts {
	I0912 18:42:13.243783   25774 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0912 18:42:13.243791   25774 command_runner.go:130] >            fallthrough
	I0912 18:42:13.243797   25774 command_runner.go:130] >         }
	I0912 18:42:13.243806   25774 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0912 18:42:13.243816   25774 command_runner.go:130] >            max_concurrent 1000
	I0912 18:42:13.243822   25774 command_runner.go:130] >         }
	I0912 18:42:13.243832   25774 command_runner.go:130] >         cache 30
	I0912 18:42:13.243840   25774 command_runner.go:130] >         loop
	I0912 18:42:13.243849   25774 command_runner.go:130] >         reload
	I0912 18:42:13.243856   25774 command_runner.go:130] >         loadbalance
	I0912 18:42:13.243867   25774 command_runner.go:130] >     }
	I0912 18:42:13.243874   25774 command_runner.go:130] > kind: ConfigMap
	I0912 18:42:13.243887   25774 command_runner.go:130] > metadata:
	I0912 18:42:13.243899   25774 command_runner.go:130] >   creationTimestamp: "2023-09-12T18:38:05Z"
	I0912 18:42:13.243906   25774 command_runner.go:130] >   name: coredns
	I0912 18:42:13.243914   25774 command_runner.go:130] >   namespace: kube-system
	I0912 18:42:13.243921   25774 command_runner.go:130] >   resourceVersion: "402"
	I0912 18:42:13.243933   25774 command_runner.go:130] >   uid: 2097770d-506f-410e-985d-435a9559f646
	I0912 18:42:13.246000   25774 node_ready.go:35] waiting up to 6m0s for node "multinode-348977" to be "Ready" ...
	I0912 18:42:13.249599   25774 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0912 18:42:13.290690   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.290724   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.290733   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.290739   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.293505   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.293530   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.293550   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.293559   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.293566   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.293575   25774 round_trippers.go:580]     Audit-Id: 20c8a9a7-ec03-44a7-92e1-ea050ea6d00e
	I0912 18:42:13.293585   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.293593   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.293869   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.490632   25774 request.go:629] Waited for 196.383051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.490691   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.490698   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.490712   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.490725   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.494985   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:13.495009   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.495018   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.495032   25774 round_trippers.go:580]     Audit-Id: 5b0c52ae-aeab-43df-8442-d1ed1c51940b
	I0912 18:42:13.495040   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.495049   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.495062   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.495075   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.495435   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.996558   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.996593   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.996606   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.996614   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.999491   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.999512   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.999521   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.999529   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.999536   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.999544   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.999551   25774 round_trippers.go:580]     Audit-Id: 71109b58-96c6-4648-9138-b568a04bbb01
	I0912 18:42:13.999560   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.000156   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:14.496863   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:14.496885   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:14.496893   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:14.496899   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:14.499602   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:14.499621   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:14.499631   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.499640   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:14.499648   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:14.499657   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:14.499670   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:14 GMT
	I0912 18:42:14.499679   25774 round_trippers.go:580]     Audit-Id: 2f6f7361-0c59-4c7d-8e76-3e32fdddd5c7
	I0912 18:42:14.499882   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:14.996605   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:14.996627   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:14.996635   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:14.996642   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:14.999646   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:14.999672   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:14.999682   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:14 GMT
	I0912 18:42:14.999688   25774 round_trippers.go:580]     Audit-Id: caa22104-8cb4-422b-9f8c-58a2074742d9
	I0912 18:42:14.999693   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.999698   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:14.999703   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:14.999712   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.000041   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:15.496765   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.496794   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.496806   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.496816   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.499654   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:15.499672   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.499679   25774 round_trippers.go:580]     Audit-Id: 770efff3-c863-4db0-aed0-8b0e4f8a7f95
	I0912 18:42:15.499684   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.499689   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.499694   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.499699   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.499704   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.499880   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:15.500255   25774 node_ready.go:49] node "multinode-348977" has status "Ready":"True"
	I0912 18:42:15.500274   25774 node_ready.go:38] duration metric: took 2.254247875s waiting for node "multinode-348977" to be "Ready" ...
	I0912 18:42:15.500284   25774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:15.500345   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:15.500357   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.500368   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.500378   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.503778   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:15.503796   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.503806   25774 round_trippers.go:580]     Audit-Id: 8d81f8ed-046a-432d-a997-45f7f7e48558
	I0912 18:42:15.503816   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.503826   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.503835   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.503840   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.503845   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.506034   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"847"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83986 chars]
	I0912 18:42:15.508504   25774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:15.508567   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:15.508575   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.508582   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.508590   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.510706   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:15.510722   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.510731   25774 round_trippers.go:580]     Audit-Id: c9cdf90d-6c73-47f6-ae2c-89120b937596
	I0912 18:42:15.510739   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.510747   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.510755   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.510763   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.510771   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.511016   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:15.511382   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.511392   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.511399   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.511404   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.512961   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.512976   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.512984   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.512992   25774 round_trippers.go:580]     Audit-Id: cb80b5ee-0f06-45a7-a84b-162ab1d3304c
	I0912 18:42:15.513000   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.513006   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.513011   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.513016   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.513296   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:15.513696   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:15.513711   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.513722   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.513732   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.515569   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.515587   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.515597   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.515604   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.515609   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.515614   25774 round_trippers.go:580]     Audit-Id: d54f597f-34a0-4a2a-8871-cd8c93e54504
	I0912 18:42:15.515619   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.515625   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.515763   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:15.516137   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.516152   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.516161   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.516170   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.517873   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.517885   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.517891   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.517896   25774 round_trippers.go:580]     Audit-Id: 5c3f42bb-785e-463b-8c12-afb92be30ba6
	I0912 18:42:15.517902   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.517911   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.517925   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.517932   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.518198   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:16.019174   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:16.019195   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.019203   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.019209   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.021794   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.021810   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.021824   25774 round_trippers.go:580]     Audit-Id: 9b21fe37-c8aa-4aa9-9f91-f2ff18584580
	I0912 18:42:16.021830   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.021842   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.021854   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.021862   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.021878   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.022301   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:16.022856   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:16.022871   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.022878   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.022884   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.024919   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.024939   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.024949   25774 round_trippers.go:580]     Audit-Id: cc43d89f-08b3-4bc9-a101-2bd12aabeb44
	I0912 18:42:16.024964   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.024977   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.024986   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.024993   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.025004   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.025365   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:16.518991   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:16.519035   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.519045   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.519051   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.521811   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.521835   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.521846   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.521855   25774 round_trippers.go:580]     Audit-Id: bdd70f78-8f74-4b34-8b50-3cdda2609256
	I0912 18:42:16.521865   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.521874   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.521887   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.521899   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.522407   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:16.522902   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:16.522916   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.522923   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.522928   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.525008   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.525028   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.525037   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.525046   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.525062   25774 round_trippers.go:580]     Audit-Id: cf83a590-0ef9-44db-9645-64394ec5153e
	I0912 18:42:16.525070   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.525083   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.525094   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.525483   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.019210   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:17.019234   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.019242   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.019248   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.022180   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.022204   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.022214   25774 round_trippers.go:580]     Audit-Id: 6d0c1aa8-a449-4955-862b-564e063d1920
	I0912 18:42:17.022223   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.022233   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.022240   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.022249   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.022261   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.022965   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:17.023459   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:17.023473   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.023480   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.023486   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.025651   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.025670   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.025679   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.025688   25774 round_trippers.go:580]     Audit-Id: 1a36148e-df68-4ad3-ad5d-a35f5dec8c94
	I0912 18:42:17.025701   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.025709   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.025719   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.025727   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.026038   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.518682   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:17.518705   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.518716   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.518727   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.521775   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:17.521818   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.521829   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.521839   25774 round_trippers.go:580]     Audit-Id: f9326923-a290-4bad-abdf-2105fe92c5b4
	I0912 18:42:17.521848   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.521858   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.521868   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.521881   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.522258   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:17.522690   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:17.522702   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.522709   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.522715   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.525035   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.525050   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.525056   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.525061   25774 round_trippers.go:580]     Audit-Id: 55b1395b-82d0-4899-9ce6-224c280343e7
	I0912 18:42:17.525066   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.525071   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.525077   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.525082   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.525229   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.525477   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:18.018887   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:18.018910   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.018919   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.018925   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.021874   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:18.021896   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.021906   25774 round_trippers.go:580]     Audit-Id: ac7164e8-d532-42d7-8e73-9713804614fe
	I0912 18:42:18.021916   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.021925   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.021934   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.021941   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.021946   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.022226   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:18.022712   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:18.022732   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.022739   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.022745   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.024717   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:18.024730   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.024736   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.024741   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.024746   25774 round_trippers.go:580]     Audit-Id: dfceff8d-4f6c-43e8-bf6a-8631bb9a3cce
	I0912 18:42:18.024751   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.024756   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.024761   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.025289   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:18.518950   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:18.518978   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.518986   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.518992   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.522096   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:18.522119   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.522132   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.522138   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.522143   25774 round_trippers.go:580]     Audit-Id: fa5de5c7-191b-4b2f-beff-478320c3a667
	I0912 18:42:18.522150   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.522158   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.522166   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.522703   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:18.523115   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:18.523126   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.523133   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.523139   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.525574   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:18.525593   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.525599   25774 round_trippers.go:580]     Audit-Id: ab6931c6-4f57-4c7b-b9aa-12d3508c6379
	I0912 18:42:18.525605   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.525610   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.525615   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.525620   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.525625   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.525939   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.018605   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:19.018628   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.018636   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.018642   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.021331   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:19.021351   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.021359   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.021366   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.021374   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.021382   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.021392   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.021414   25774 round_trippers.go:580]     Audit-Id: 27419a73-847c-4f51-bd91-89e052f1edb4
	I0912 18:42:19.021912   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:19.022329   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:19.022342   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.022349   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.022355   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.024339   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:19.024359   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.024368   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.024378   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.024388   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.024395   25774 round_trippers.go:580]     Audit-Id: d84dc941-78b5-4672-a72d-ddd4cb0d7c29
	I0912 18:42:19.024409   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.024418   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.024715   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.519434   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:19.519470   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.519482   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.519491   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.522336   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:19.522358   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.522368   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.522376   25774 round_trippers.go:580]     Audit-Id: 3558f8cc-a921-4e89-84e1-6ac9cde9cd1e
	I0912 18:42:19.522385   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.522394   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.522405   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.522420   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.522736   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:19.523291   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:19.523305   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.523312   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.523317   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.525325   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:19.525338   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.525345   25774 round_trippers.go:580]     Audit-Id: aa86495f-0156-411b-a070-e22a45555259
	I0912 18:42:19.525350   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.525355   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.525361   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.525369   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.525377   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.525749   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.526064   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:20.019472   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:20.019497   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.019509   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.019520   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.022460   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.022483   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.022493   25774 round_trippers.go:580]     Audit-Id: 7716efa4-da02-44ff-bfa0-9dcb7861e619
	I0912 18:42:20.022502   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.022510   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.022518   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.022526   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.022539   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.023124   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:20.023619   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:20.023637   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.023647   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.023654   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.026215   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.026229   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.026235   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.026240   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.026246   25774 round_trippers.go:580]     Audit-Id: fa7b0ca0-8540-4605-82ab-535bcc959a68
	I0912 18:42:20.026254   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.026263   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.026272   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.026670   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:20.519423   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:20.519453   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.519465   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.519475   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.522173   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.522191   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.522201   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.522207   25774 round_trippers.go:580]     Audit-Id: 0e759373-0227-4917-8a2a-ff09025291b0
	I0912 18:42:20.522212   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.522217   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.522222   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.522229   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.522492   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:20.523018   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:20.523039   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.523047   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.523055   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.525064   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:20.525083   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.525093   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.525101   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.525112   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.525123   25774 round_trippers.go:580]     Audit-Id: e9114154-e1ac-42a2-857e-78c0a336e42e
	I0912 18:42:20.525134   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.525145   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.525296   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:21.018922   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:21.018949   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.018962   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.018985   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.021703   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.021729   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.021742   25774 round_trippers.go:580]     Audit-Id: 21cbb700-4503-4ee3-80d7-74b589e29284
	I0912 18:42:21.021750   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.021757   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.021768   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.021774   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.021784   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.022150   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:21.022724   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:21.022739   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.022746   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.022751   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.024834   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.024852   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.024872   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.024890   25774 round_trippers.go:580]     Audit-Id: eb216dd9-7990-4170-99be-ce935ee83b5b
	I0912 18:42:21.024898   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.024906   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.024912   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.024917   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.025328   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:21.519273   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:21.519299   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.519312   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.519321   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.521761   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.521787   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.521797   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.521804   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.521810   25774 round_trippers.go:580]     Audit-Id: df785d50-875d-4b1e-b8b8-fa57c4b91949
	I0912 18:42:21.521815   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.521820   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.521826   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.522091   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:21.522763   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:21.522783   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.522795   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.522804   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.525017   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.525035   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.525042   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.525047   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.525056   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.525062   25774 round_trippers.go:580]     Audit-Id: 5f96ccf0-b25d-421e-8606-0097faa881df
	I0912 18:42:21.525067   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.525072   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.525186   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:22.018823   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:22.018846   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.018854   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.018861   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.021780   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.021805   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.021816   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.021823   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.021829   25774 round_trippers.go:580]     Audit-Id: 6d1f4828-16d5-4d36-9236-531d7c6463cb
	I0912 18:42:22.021834   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.021840   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.021845   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.022058   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:22.022637   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:22.022651   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.022658   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.022666   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.025235   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.025254   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.025264   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.025274   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.025289   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.025298   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.025307   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.025314   25774 round_trippers.go:580]     Audit-Id: 28240cd4-59d0-4af1-9428-aa9546fb2eb2
	I0912 18:42:22.025722   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:22.025994   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:22.519498   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:22.519533   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.519545   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.519615   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.522127   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.522155   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.522165   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.522173   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.522182   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.522190   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.522198   25774 round_trippers.go:580]     Audit-Id: 72500589-0df9-4e05-a284-4aab07bc1a90
	I0912 18:42:22.522205   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.522539   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:22.523066   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:22.523081   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.523088   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.523093   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.526415   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:22.526434   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.526444   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.526452   25774 round_trippers.go:580]     Audit-Id: aba69887-23b0-4ad3-9591-255738e5c9cd
	I0912 18:42:22.526472   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.526480   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.526489   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.526497   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.526919   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:23.018648   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:23.018677   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.018688   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.018697   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.021371   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.021393   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.021401   25774 round_trippers.go:580]     Audit-Id: 308ed084-779c-4b7e-a6f9-8d335954f26d
	I0912 18:42:23.021409   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.021417   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.021426   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.021433   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.021442   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.021652   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:23.022261   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:23.022276   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.022283   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.022296   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.024418   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.024438   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.024447   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.024460   25774 round_trippers.go:580]     Audit-Id: f8385c41-1ac3-4937-8e08-0853d2f07b61
	I0912 18:42:23.024472   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.024480   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.024493   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.024502   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.024636   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:23.519355   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:23.519379   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.519387   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.519393   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.521923   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.521935   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.521941   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.521946   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.521952   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.521961   25774 round_trippers.go:580]     Audit-Id: 8df3aec7-70dd-46ad-9b0c-60e47006f66d
	I0912 18:42:23.521967   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.521972   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.522313   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:23.522786   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:23.522800   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.522807   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.522813   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.525090   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.525102   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.525108   25774 round_trippers.go:580]     Audit-Id: c5a51933-8848-4d9d-86b8-cfa9a1715c83
	I0912 18:42:23.525113   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.525118   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.525123   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.525128   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.525133   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.525525   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.019244   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:24.019267   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.019275   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.019281   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.022149   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.022178   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.022184   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.022190   25774 round_trippers.go:580]     Audit-Id: 0e54f509-fea2-4447-95e7-0adef2cc4a71
	I0912 18:42:24.022195   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.022206   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.022214   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.022224   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.022566   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:24.023022   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:24.023034   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.023041   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.023047   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.025255   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.025267   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.025273   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.025278   25774 round_trippers.go:580]     Audit-Id: 4012be6b-a1d1-4822-8153-07785c0f087c
	I0912 18:42:24.025285   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.025290   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.025295   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.025300   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.025513   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.519223   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:24.519245   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.519253   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.519259   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.521740   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.521759   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.521769   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.521778   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.521806   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.521820   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.521828   25774 round_trippers.go:580]     Audit-Id: dd7800e3-c33b-4d72-b2a4-1f435c3588d6
	I0912 18:42:24.521838   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.522426   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:24.522882   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:24.522894   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.522904   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.522916   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.524935   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.524951   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.524966   25774 round_trippers.go:580]     Audit-Id: e70caa01-d717-4aa8-b453-e6b24304ecb7
	I0912 18:42:24.524975   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.524987   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.524992   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.524997   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.525003   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.525181   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.525544   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:25.018770   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:25.018797   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.018809   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.018818   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.021606   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.021658   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.021669   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.021678   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.021685   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.021697   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.021709   25774 round_trippers.go:580]     Audit-Id: a707fbe4-5e5a-4a76-9552-ae18693b3ade
	I0912 18:42:25.021718   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.023780   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:25.024379   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:25.024398   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.024408   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.024426   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.026655   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.026674   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.026683   25774 round_trippers.go:580]     Audit-Id: 1d90b0fb-50df-4d38-bf46-db8bc42a342b
	I0912 18:42:25.026691   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.026701   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.026709   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.026718   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.026726   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.026889   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:25.519646   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:25.519678   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.519688   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.519694   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.522377   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.522402   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.522425   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.522434   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.522443   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.522455   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.522465   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.522475   25774 round_trippers.go:580]     Audit-Id: 92944b9c-849f-477a-8160-683445d1d4a8
	I0912 18:42:25.523055   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:25.523492   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:25.523505   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.523512   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.523517   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.526015   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.526034   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.526046   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.526055   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.526071   25774 round_trippers.go:580]     Audit-Id: 1ee1e622-a49f-4d1d-bc0d-11e709dd8dda
	I0912 18:42:25.526079   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.526090   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.526096   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.526218   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:26.019099   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:26.019117   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.019125   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.019131   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.022416   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:26.022440   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.022450   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.022456   25774 round_trippers.go:580]     Audit-Id: 49021994-f425-426a-b645-d11b0bef6ff2
	I0912 18:42:26.022461   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.022469   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.022474   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.022480   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.023077   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:26.023486   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:26.023497   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.023504   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.023509   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.026077   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.026094   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.026104   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.026111   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.026118   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.026126   25774 round_trippers.go:580]     Audit-Id: 52a298de-ed90-4986-b7be-58541206edef
	I0912 18:42:26.026135   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.026145   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.026364   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:26.519031   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:26.519052   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.519060   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.519067   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.521656   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.521678   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.521686   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.521691   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.521696   25774 round_trippers.go:580]     Audit-Id: 0ff6011d-720e-449b-8eb1-46b0e14ea217
	I0912 18:42:26.521701   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.521706   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.521711   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.522134   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:26.522540   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:26.522554   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.522560   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.522566   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.524786   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.524800   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.524806   25774 round_trippers.go:580]     Audit-Id: 2d945f25-12df-4442-8171-8110b7ec953e
	I0912 18:42:26.524811   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.524819   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.524827   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.524836   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.524845   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.525119   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.018765   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:27.018794   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.018805   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.018815   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.025520   25774 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0912 18:42:27.025543   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.025553   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.025560   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.025567   25774 round_trippers.go:580]     Audit-Id: 6018ab1b-6d81-43ce-9088-f6d64d3ef8f9
	I0912 18:42:27.025576   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.025584   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.025591   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.025734   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"882","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6722 chars]
	I0912 18:42:27.026159   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.026170   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.026177   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.026183   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.029454   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:27.029469   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.029476   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.029481   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.029486   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.029491   25774 round_trippers.go:580]     Audit-Id: 058f3ee6-b56c-4d93-b76e-c92601975585
	I0912 18:42:27.029497   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.029506   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.029625   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.029892   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:27.519325   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:27.519347   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.519355   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.519361   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.521737   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.521753   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.521760   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.521765   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.521771   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.521776   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.521781   25774 round_trippers.go:580]     Audit-Id: 0685a0df-f7eb-4093-ab97-48796cc84165
	I0912 18:42:27.521789   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.522261   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0912 18:42:27.522688   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.522699   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.522706   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.522712   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.524650   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.524669   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.524679   25774 round_trippers.go:580]     Audit-Id: fd53992f-f915-482b-91e2-7915a59fa965
	I0912 18:42:27.524688   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.524696   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.524707   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.524715   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.524739   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.525068   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.525328   25774 pod_ready.go:92] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.525341   25774 pod_ready.go:81] duration metric: took 12.016818518s waiting for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.525348   25774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.525392   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348977
	I0912 18:42:27.525399   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.525406   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.525411   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.527348   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.527362   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.527368   25774 round_trippers.go:580]     Audit-Id: 7c82a007-a8b9-458c-b72f-b5158f5d9f79
	I0912 18:42:27.527373   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.527379   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.527384   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.527392   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.527397   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.527569   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"870","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0912 18:42:27.527970   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.527988   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.527999   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.528008   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.529544   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.529556   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.529562   25774 round_trippers.go:580]     Audit-Id: 49c08a23-43c6-4b36-97cd-cdf623268d39
	I0912 18:42:27.529567   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.529572   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.529580   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.529585   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.529590   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.529750   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.530037   25774 pod_ready.go:92] pod "etcd-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.530051   25774 pod_ready.go:81] duration metric: took 4.69789ms waiting for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.530068   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.530109   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348977
	I0912 18:42:27.530119   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.530129   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.530140   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.532020   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.532031   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.532036   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.532041   25774 round_trippers.go:580]     Audit-Id: 69b6d28a-cc81-4865-a415-98d5e4ab2e88
	I0912 18:42:27.532046   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.532052   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.532061   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.532066   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.532210   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348977","namespace":"kube-system","uid":"f540dfd0-b1d9-4e3f-b9ab-f02db770e920","resourceVersion":"857","creationTimestamp":"2023-09-12T18:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.209:8443","kubernetes.io/config.hash":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.mirror":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.seen":"2023-09-12T18:38:05.461231178Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I0912 18:42:27.532613   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.532626   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.532633   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.532639   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.534337   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.534348   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.534354   25774 round_trippers.go:580]     Audit-Id: d8ed022c-9bdc-426c-8417-9bdbab3e0568
	I0912 18:42:27.534359   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.534364   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.534368   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.534373   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.534378   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.534556   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.534912   25774 pod_ready.go:92] pod "kube-apiserver-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.534932   25774 pod_ready.go:81] duration metric: took 4.857194ms waiting for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.534941   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.535010   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348977
	I0912 18:42:27.535020   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.535026   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.535032   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.536478   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.536489   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.536498   25774 round_trippers.go:580]     Audit-Id: dc26cd07-5e2b-418c-81e4-ed7f5f4cea37
	I0912 18:42:27.536506   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.536520   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.536528   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.536540   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.536552   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.536872   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348977","namespace":"kube-system","uid":"930d0357-f21e-4a4e-8c3b-2cff3263568f","resourceVersion":"851","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.mirror":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.seen":"2023-09-12T18:37:56.784236763Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I0912 18:42:27.537190   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.537201   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.537208   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.537213   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.539168   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.539187   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.539196   25774 round_trippers.go:580]     Audit-Id: 96077995-eaf1-4ae5-816a-8a44fe54d0e0
	I0912 18:42:27.539205   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.539217   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.539225   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.539236   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.539247   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.539389   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.539705   25774 pod_ready.go:92] pod "kube-controller-manager-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.539721   25774 pod_ready.go:81] duration metric: took 4.774197ms waiting for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.539730   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.539778   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:27.539785   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.539792   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.539797   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.541391   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.541405   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.541412   25774 round_trippers.go:580]     Audit-Id: d5988258-541c-4b62-b811-17340c9d4c61
	I0912 18:42:27.541417   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.541422   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.541429   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.541436   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.541443   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.541635   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wfpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"774a14f5-3c1d-4a3b-a265-290361f0fbe3","resourceVersion":"515","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0912 18:42:27.541939   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:27.541951   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.541957   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.541962   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.543735   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.543753   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.543762   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.543770   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.543779   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.543787   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.543795   25774 round_trippers.go:580]     Audit-Id: c439d5a2-848f-462c-8997-8b09354202f6
	I0912 18:42:27.543803   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.544002   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m02","uid":"0a11e94b-756b-4c81-9734-627ddcc38b98","resourceVersion":"581","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3266 chars]
	I0912 18:42:27.544241   25774 pod_ready.go:92] pod "kube-proxy-2wfpr" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.544255   25774 pod_ready.go:81] duration metric: took 4.520204ms waiting for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.544264   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.719627   25774 request.go:629] Waited for 175.317143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:27.719692   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:27.719702   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.719713   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.719724   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.722697   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.722720   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.722730   25774 round_trippers.go:580]     Audit-Id: d3483f79-1d80-489b-9726-e0bcfc0757be
	I0912 18:42:27.722738   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.722746   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.722754   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.722762   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.722770   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.722965   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvnqz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d610f9be-c231-4aae-9870-e627ce41bf23","resourceVersion":"736","creationTimestamp":"2023-09-12T18:39:59Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0912 18:42:27.919793   25774 request.go:629] Waited for 196.375026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:27.919854   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:27.919859   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.919866   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.919873   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.922352   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.922369   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.922376   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.922381   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.922386   25774 round_trippers.go:580]     Audit-Id: 18dc451a-97b4-4669-a8d1-fe83de2c3208
	I0912 18:42:27.922391   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.922396   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.922401   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.922552   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m03","uid":"03d033eb-43a1-4b37-a2a0-6de70662f3e7","resourceVersion":"753","creationTimestamp":"2023-09-12T18:40:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:40:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0912 18:42:27.922880   25774 pod_ready.go:92] pod "kube-proxy-fvnqz" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.922898   25774 pod_ready.go:81] duration metric: took 378.627886ms waiting for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.922913   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.120317   25774 request.go:629] Waited for 197.342872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:28.120378   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:28.120383   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.120397   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.120412   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.123127   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.123147   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.123154   25774 round_trippers.go:580]     Audit-Id: 091153d1-359b-4f12-a3a3-ccdbdc81297d
	I0912 18:42:28.123160   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.123165   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.123170   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.123175   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.123181   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.123500   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gp457","generateName":"kube-proxy-","namespace":"kube-system","uid":"39d70e08-cba7-4545-a6eb-a2e9152458dc","resourceVersion":"844","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I0912 18:42:28.320266   25774 request.go:629] Waited for 196.341863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.320310   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.320315   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.320322   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.320328   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.322784   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.322801   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.322807   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.322812   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.322817   25774 round_trippers.go:580]     Audit-Id: 7a971df4-048d-411f-84d5-edeca5d0a808
	I0912 18:42:28.322822   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.322830   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.322838   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.323280   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:28.323558   25774 pod_ready.go:92] pod "kube-proxy-gp457" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:28.323569   25774 pod_ready.go:81] duration metric: took 400.650162ms waiting for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.323577   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.519997   25774 request.go:629] Waited for 196.359932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:28.520052   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:28.520057   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.520064   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.520070   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.522614   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.522643   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.522651   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.522657   25774 round_trippers.go:580]     Audit-Id: 81598ada-aa63-48f8-bbb7-bb3b59d03fca
	I0912 18:42:28.522663   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.522671   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.522676   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.522682   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.523030   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348977","namespace":"kube-system","uid":"69ef187d-8c5d-4b26-861e-4a2178c309e7","resourceVersion":"850","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.mirror":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.seen":"2023-09-12T18:37:56.784237754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I0912 18:42:28.719797   25774 request.go:629] Waited for 196.343169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.719852   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.719857   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.719864   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.719870   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.722628   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.722647   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.722657   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.722665   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.722670   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.722690   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.722704   25774 round_trippers.go:580]     Audit-Id: 09261421-b5b9-47f1-8400-375ba280b4aa
	I0912 18:42:28.722709   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.723026   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:28.723300   25774 pod_ready.go:92] pod "kube-scheduler-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:28.723312   25774 pod_ready.go:81] duration metric: took 399.729056ms waiting for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.723321   25774 pod_ready.go:38] duration metric: took 13.223027127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:28.723336   25774 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:42:28.723377   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:28.736121   25774 command_runner.go:130] > 1613
	I0912 18:42:28.736176   25774 api_server.go:72] duration metric: took 15.617469319s to wait for apiserver process to appear ...
	I0912 18:42:28.736186   25774 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:42:28.736202   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:28.742568   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:28.742691   25774 round_trippers.go:463] GET https://192.168.39.209:8443/version
	I0912 18:42:28.742706   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.742717   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.742742   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.743597   25774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0912 18:42:28.743611   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.743617   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.743622   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.743628   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.743635   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.743644   25774 round_trippers.go:580]     Content-Length: 263
	I0912 18:42:28.743652   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.743664   25774 round_trippers.go:580]     Audit-Id: 1f819583-ec91-4248-8d1d-f0faa5cdc977
	I0912 18:42:28.743686   25774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 18:42:28.743734   25774 api_server.go:141] control plane version: v1.28.1
	I0912 18:42:28.743746   25774 api_server.go:131] duration metric: took 7.554171ms to wait for apiserver health ...
	I0912 18:42:28.743753   25774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:42:28.920155   25774 request.go:629] Waited for 176.33099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:28.920222   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:28.920228   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.920239   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.920248   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.924609   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:28.924625   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.924631   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.924637   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.924642   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.924647   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.924652   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.924657   25774 round_trippers.go:580]     Audit-Id: 61199736-c30a-4f20-a0fe-85ab567c6748
	I0912 18:42:28.926078   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0912 18:42:28.928488   25774 system_pods.go:59] 12 kube-system pods found
	I0912 18:42:28.928508   25774 system_pods.go:61] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running
	I0912 18:42:28.928516   25774 system_pods.go:61] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running
	I0912 18:42:28.928521   25774 system_pods.go:61] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:28.928529   25774 system_pods.go:61] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:28.928543   25774 system_pods.go:61] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running
	I0912 18:42:28.928549   25774 system_pods.go:61] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running
	I0912 18:42:28.928556   25774 system_pods.go:61] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running
	I0912 18:42:28.928564   25774 system_pods.go:61] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:28.928568   25774 system_pods.go:61] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:28.928575   25774 system_pods.go:61] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running
	I0912 18:42:28.928579   25774 system_pods.go:61] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running
	I0912 18:42:28.928583   25774 system_pods.go:61] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running
	I0912 18:42:28.928589   25774 system_pods.go:74] duration metric: took 184.827503ms to wait for pod list to return data ...
	I0912 18:42:28.928596   25774 default_sa.go:34] waiting for default service account to be created ...
	I0912 18:42:29.120018   25774 request.go:629] Waited for 191.358708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/default/serviceaccounts
	I0912 18:42:29.120097   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/default/serviceaccounts
	I0912 18:42:29.120104   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.120112   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.120126   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.123049   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:29.123069   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.123079   25774 round_trippers.go:580]     Content-Length: 261
	I0912 18:42:29.123088   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.123097   25774 round_trippers.go:580]     Audit-Id: 0cac5d85-cbe1-4c12-91ee-4a50deb388eb
	I0912 18:42:29.123106   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.123115   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.123122   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.123128   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.123154   25774 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"55ef2ca3-3fa0-482c-9704-129c61fdc121","resourceVersion":"365","creationTimestamp":"2023-09-12T18:38:17Z"}}]}
	I0912 18:42:29.123368   25774 default_sa.go:45] found service account: "default"
	I0912 18:42:29.123387   25774 default_sa.go:55] duration metric: took 194.785544ms for default service account to be created ...
	I0912 18:42:29.123402   25774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 18:42:29.319837   25774 request.go:629] Waited for 196.373018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:29.319891   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:29.319922   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.319951   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.319971   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.324234   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:29.324257   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.324267   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.324275   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.324283   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.324293   25774 round_trippers.go:580]     Audit-Id: 30f84bd7-2fdc-4719-8ffc-f3f8ff44f576
	I0912 18:42:29.324301   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.324310   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.325766   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0912 18:42:29.328222   25774 system_pods.go:86] 12 kube-system pods found
	I0912 18:42:29.328242   25774 system_pods.go:89] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running
	I0912 18:42:29.328247   25774 system_pods.go:89] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running
	I0912 18:42:29.328252   25774 system_pods.go:89] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:29.328257   25774 system_pods.go:89] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:29.328263   25774 system_pods.go:89] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running
	I0912 18:42:29.328270   25774 system_pods.go:89] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running
	I0912 18:42:29.328277   25774 system_pods.go:89] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running
	I0912 18:42:29.328292   25774 system_pods.go:89] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:29.328298   25774 system_pods.go:89] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:29.328302   25774 system_pods.go:89] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running
	I0912 18:42:29.328307   25774 system_pods.go:89] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running
	I0912 18:42:29.328310   25774 system_pods.go:89] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running
	I0912 18:42:29.328316   25774 system_pods.go:126] duration metric: took 204.909135ms to wait for k8s-apps to be running ...
	I0912 18:42:29.328325   25774 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 18:42:29.328370   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:29.341359   25774 system_svc.go:56] duration metric: took 13.030228ms WaitForService to wait for kubelet.
	I0912 18:42:29.341381   25774 kubeadm.go:581] duration metric: took 16.222676844s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 18:42:29.341399   25774 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:42:29.519828   25774 request.go:629] Waited for 178.364112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:29.519911   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:29.519918   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.519929   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.519940   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.522725   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:29.522742   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.522749   25774 round_trippers.go:580]     Audit-Id: e00e13ba-2677-4829-b344-8ada38a7e166
	I0912 18:42:29.522755   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.522762   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.522770   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.522782   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.522797   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.523112   25774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
	I0912 18:42:29.523631   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523649   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523658   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523664   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523677   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523686   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523694   25774 node_conditions.go:105] duration metric: took 182.290333ms to run NodePressure ...
	I0912 18:42:29.523707   25774 start.go:228] waiting for startup goroutines ...
	I0912 18:42:29.523715   25774 start.go:233] waiting for cluster config update ...
	I0912 18:42:29.523724   25774 start.go:242] writing updated cluster config ...
	I0912 18:42:29.524158   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:29.524248   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:29.527653   25774 out.go:177] * Starting worker node multinode-348977-m02 in cluster multinode-348977
	I0912 18:42:29.529157   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:42:29.529180   25774 cache.go:57] Caching tarball of preloaded images
	I0912 18:42:29.529277   25774 preload.go:174] Found /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:42:29.529288   25774 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:42:29.529376   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:29.529545   25774 start.go:365] acquiring machines lock for multinode-348977-m02: {Name:mkb814e9f5e9709f943ea910e0cc7d91215dc74f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:42:29.529588   25774 start.go:369] acquired machines lock for "multinode-348977-m02" in 23.462µs
	I0912 18:42:29.529606   25774 start.go:96] Skipping create...Using existing machine configuration
	I0912 18:42:29.529615   25774 fix.go:54] fixHost starting: m02
	I0912 18:42:29.529896   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:29.529918   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:29.543842   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0912 18:42:29.544256   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:29.544682   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:42:29.544708   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:29.544985   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:29.545132   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:29.545265   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetState
	I0912 18:42:29.546866   25774 fix.go:102] recreateIfNeeded on multinode-348977-m02: state=Stopped err=<nil>
	I0912 18:42:29.546891   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	W0912 18:42:29.547062   25774 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 18:42:29.548960   25774 out.go:177] * Restarting existing kvm2 VM for "multinode-348977-m02" ...
	I0912 18:42:29.550233   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .Start
	I0912 18:42:29.550396   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring networks are active...
	I0912 18:42:29.551115   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring network default is active
	I0912 18:42:29.551433   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring network mk-multinode-348977 is active
	I0912 18:42:29.551771   25774 main.go:141] libmachine: (multinode-348977-m02) Getting domain xml...
	I0912 18:42:29.552344   25774 main.go:141] libmachine: (multinode-348977-m02) Creating domain...
	I0912 18:42:30.767498   25774 main.go:141] libmachine: (multinode-348977-m02) Waiting to get IP...
	I0912 18:42:30.768372   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:30.768756   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:30.768796   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:30.768731   26026 retry.go:31] will retry after 235.940556ms: waiting for machine to come up
	I0912 18:42:31.006160   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.006647   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.006677   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.006603   26026 retry.go:31] will retry after 364.360851ms: waiting for machine to come up
	I0912 18:42:31.372196   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.372728   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.372759   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.372673   26026 retry.go:31] will retry after 381.551229ms: waiting for machine to come up
	I0912 18:42:31.756143   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.756569   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.756596   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.756516   26026 retry.go:31] will retry after 467.043566ms: waiting for machine to come up
	I0912 18:42:32.225092   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:32.225542   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:32.225565   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:32.225522   26026 retry.go:31] will retry after 717.918575ms: waiting for machine to come up
	I0912 18:42:32.944665   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:32.944984   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:32.945013   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:32.944938   26026 retry.go:31] will retry after 777.588344ms: waiting for machine to come up
	I0912 18:42:33.723615   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:33.724005   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:33.724028   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:33.723989   26026 retry.go:31] will retry after 1.005231305s: waiting for machine to come up
	I0912 18:42:34.730358   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:34.730734   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:34.730770   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:34.730686   26026 retry.go:31] will retry after 958.78563ms: waiting for machine to come up
	I0912 18:42:35.690983   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:35.691399   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:35.691421   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:35.691373   26026 retry.go:31] will retry after 1.539184895s: waiting for machine to come up
	I0912 18:42:37.231731   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:37.232165   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:37.232197   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:37.232143   26026 retry.go:31] will retry after 2.237252703s: waiting for machine to come up
	I0912 18:42:39.472512   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:39.472959   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:39.473011   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:39.472905   26026 retry.go:31] will retry after 2.152692302s: waiting for machine to come up
	I0912 18:42:41.627680   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:41.628098   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:41.628133   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:41.628032   26026 retry.go:31] will retry after 2.890854285s: waiting for machine to come up
	I0912 18:42:44.521895   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:44.522238   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:44.522262   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:44.522192   26026 retry.go:31] will retry after 2.979799431s: waiting for machine to come up
	I0912 18:42:47.505585   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.506105   25774 main.go:141] libmachine: (multinode-348977-m02) Found IP for machine: 192.168.39.55
	I0912 18:42:47.506134   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has current primary IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.506144   25774 main.go:141] libmachine: (multinode-348977-m02) Reserving static IP address...
	I0912 18:42:47.506564   25774 main.go:141] libmachine: (multinode-348977-m02) Reserved static IP address: 192.168.39.55
	I0912 18:42:47.506615   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "multinode-348977-m02", mac: "52:54:00:fb:c0:ce", ip: "192.168.39.55"} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.506635   25774 main.go:141] libmachine: (multinode-348977-m02) Waiting for SSH to be available...
	I0912 18:42:47.506659   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | skip adding static IP to network mk-multinode-348977 - found existing host DHCP lease matching {name: "multinode-348977-m02", mac: "52:54:00:fb:c0:ce", ip: "192.168.39.55"}
	I0912 18:42:47.506681   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Getting to WaitForSSH function...
	I0912 18:42:47.508611   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.508965   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.508992   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.509119   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Using SSH client type: external
	I0912 18:42:47.509153   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa (-rw-------)
	I0912 18:42:47.509178   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:42:47.509190   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | About to run SSH command:
	I0912 18:42:47.509201   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | exit 0
	I0912 18:42:47.594719   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | SSH cmd err, output: <nil>: 
	I0912 18:42:47.595034   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetConfigRaw
	I0912 18:42:47.595656   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:47.598153   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.598542   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.598576   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.598809   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:47.599008   25774 machine.go:88] provisioning docker machine ...
	I0912 18:42:47.599027   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:47.599233   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.599393   25774 buildroot.go:166] provisioning hostname "multinode-348977-m02"
	I0912 18:42:47.599410   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.599573   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.601705   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.602082   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.602107   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.602240   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:47.602444   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.602620   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.602777   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:47.602919   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:47.603241   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:47.603262   25774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348977-m02 && echo "multinode-348977-m02" | sudo tee /etc/hostname
	I0912 18:42:47.727967   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348977-m02
	
	I0912 18:42:47.727992   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.730980   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.731324   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.731357   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.731546   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:47.731734   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.731942   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.732071   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:47.732251   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:47.732720   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:47.732751   25774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348977-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348977-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348977-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:42:47.851882   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:42:47.851910   25774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3674/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3674/.minikube}
	I0912 18:42:47.851930   25774 buildroot.go:174] setting up certificates
	I0912 18:42:47.851944   25774 provision.go:83] configureAuth start
	I0912 18:42:47.851961   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.852222   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:47.854839   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.855194   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.855226   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.855337   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.857401   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.857747   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.857778   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.857894   25774 provision.go:138] copyHostCerts
	I0912 18:42:47.857926   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:42:47.857965   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem, removing ...
	I0912 18:42:47.857979   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:42:47.858051   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem (1123 bytes)
	I0912 18:42:47.858137   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:42:47.858162   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem, removing ...
	I0912 18:42:47.858172   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:42:47.858209   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem (1675 bytes)
	I0912 18:42:47.858270   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:42:47.858293   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem, removing ...
	I0912 18:42:47.858300   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:42:47.858334   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem (1078 bytes)
	I0912 18:42:47.858394   25774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem org=jenkins.multinode-348977-m02 san=[192.168.39.55 192.168.39.55 localhost 127.0.0.1 minikube multinode-348977-m02]
	I0912 18:42:48.213648   25774 provision.go:172] copyRemoteCerts
	I0912 18:42:48.213711   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:42:48.213739   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.216496   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.216875   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.216910   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.217086   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.217304   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.217440   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.217540   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:48.299272   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 18:42:48.299343   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 18:42:48.323067   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 18:42:48.323135   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0912 18:42:48.346811   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 18:42:48.346879   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 18:42:48.369076   25774 provision.go:86] duration metric: configureAuth took 517.116419ms
	I0912 18:42:48.369101   25774 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:42:48.369320   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:48.369360   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:48.369693   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.372404   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.372825   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.372851   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.373017   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.373198   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.373387   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.373552   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.373737   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.374095   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.374108   25774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 18:42:48.484155   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 18:42:48.484175   25774 buildroot.go:70] root file system type: tmpfs
	I0912 18:42:48.484269   25774 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 18:42:48.484284   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.486806   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.487163   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.487199   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.487339   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.487537   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.487696   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.487860   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.487993   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.488283   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.488362   25774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.209"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 18:42:48.611547   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.209
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 18:42:48.611575   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.614223   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.614651   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.614685   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.614810   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.615012   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.615161   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.615320   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.615531   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.615880   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.615910   25774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 18:42:49.491968   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 18:42:49.491990   25774 machine.go:91] provisioned docker machine in 1.892968996s
	I0912 18:42:49.492001   25774 start.go:300] post-start starting for "multinode-348977-m02" (driver="kvm2")
	I0912 18:42:49.492011   25774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:42:49.492033   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.492389   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:42:49.492428   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.495587   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.496039   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.496074   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.496235   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.496409   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.496557   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.496709   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.580048   25774 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:42:49.584124   25774 command_runner.go:130] > NAME=Buildroot
	I0912 18:42:49.584146   25774 command_runner.go:130] > VERSION=2021.02.12-1-gaa74cea-dirty
	I0912 18:42:49.584153   25774 command_runner.go:130] > ID=buildroot
	I0912 18:42:49.584161   25774 command_runner.go:130] > VERSION_ID=2021.02.12
	I0912 18:42:49.584168   25774 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0912 18:42:49.584316   25774 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:42:49.584334   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/addons for local assets ...
	I0912 18:42:49.584409   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/files for local assets ...
	I0912 18:42:49.584509   25774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> 108482.pem in /etc/ssl/certs
	I0912 18:42:49.584523   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /etc/ssl/certs/108482.pem
	I0912 18:42:49.584635   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 18:42:49.592773   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:42:49.617623   25774 start.go:303] post-start completed in 125.608825ms
	I0912 18:42:49.617646   25774 fix.go:56] fixHost completed within 20.088031606s
	I0912 18:42:49.617665   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.620435   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.620845   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.620869   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.621069   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.621262   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.621404   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.621570   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.621758   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:49.622052   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:49.622063   25774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0912 18:42:49.731465   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694544169.678612481
	
	I0912 18:42:49.731485   25774 fix.go:206] guest clock: 1694544169.678612481
	I0912 18:42:49.731492   25774 fix.go:219] Guest: 2023-09-12 18:42:49.678612481 +0000 UTC Remote: 2023-09-12 18:42:49.617649209 +0000 UTC m=+83.981581209 (delta=60.963272ms)
	I0912 18:42:49.731504   25774 fix.go:190] guest clock delta is within tolerance: 60.963272ms
	I0912 18:42:49.731513   25774 start.go:83] releasing machines lock for "multinode-348977-m02", held for 20.201911405s
	I0912 18:42:49.731541   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.731783   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:49.734410   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.734890   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.734925   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.737058   25774 out.go:177] * Found network options:
	I0912 18:42:49.738484   25774 out.go:177]   - NO_PROXY=192.168.39.209
	W0912 18:42:49.739948   25774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 18:42:49.739975   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740468   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740681   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740737   25774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:42:49.740784   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	W0912 18:42:49.740858   25774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 18:42:49.740942   25774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 18:42:49.740979   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.743639   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.743671   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744084   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.744116   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744145   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.744165   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744240   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.744410   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.744416   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.744596   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.744599   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.744774   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.744769   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.744886   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.854820   25774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 18:42:49.855190   25774 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 18:42:49.855233   25774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:42:49.855293   25774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:42:49.872592   25774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0912 18:42:49.872900   25774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:42:49.872923   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:42:49.873033   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:42:49.890584   25774 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0912 18:42:49.891097   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:42:49.901256   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:42:49.911217   25774 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:42:49.911258   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:42:49.921924   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:42:49.932287   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:42:49.942216   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:42:49.952004   25774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:42:49.962020   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:42:49.971792   25774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:42:49.980297   25774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 18:42:49.980393   25774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:42:49.989544   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:50.094046   25774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:42:50.115209   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:42:50.115284   25774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 18:42:50.127032   25774 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0912 18:42:50.127923   25774 command_runner.go:130] > [Unit]
	I0912 18:42:50.127939   25774 command_runner.go:130] > Description=Docker Application Container Engine
	I0912 18:42:50.127944   25774 command_runner.go:130] > Documentation=https://docs.docker.com
	I0912 18:42:50.127950   25774 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0912 18:42:50.127955   25774 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0912 18:42:50.127962   25774 command_runner.go:130] > StartLimitBurst=3
	I0912 18:42:50.127966   25774 command_runner.go:130] > StartLimitIntervalSec=60
	I0912 18:42:50.127971   25774 command_runner.go:130] > [Service]
	I0912 18:42:50.127976   25774 command_runner.go:130] > Type=notify
	I0912 18:42:50.127985   25774 command_runner.go:130] > Restart=on-failure
	I0912 18:42:50.127996   25774 command_runner.go:130] > Environment=NO_PROXY=192.168.39.209
	I0912 18:42:50.128008   25774 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0912 18:42:50.128019   25774 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0912 18:42:50.128032   25774 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0912 18:42:50.128039   25774 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0912 18:42:50.128046   25774 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0912 18:42:50.128053   25774 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0912 18:42:50.128062   25774 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0912 18:42:50.128071   25774 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0912 18:42:50.128083   25774 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0912 18:42:50.128090   25774 command_runner.go:130] > ExecStart=
	I0912 18:42:50.128114   25774 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0912 18:42:50.128127   25774 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0912 18:42:50.128134   25774 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0912 18:42:50.128140   25774 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0912 18:42:50.128145   25774 command_runner.go:130] > LimitNOFILE=infinity
	I0912 18:42:50.128149   25774 command_runner.go:130] > LimitNPROC=infinity
	I0912 18:42:50.128154   25774 command_runner.go:130] > LimitCORE=infinity
	I0912 18:42:50.128161   25774 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0912 18:42:50.128168   25774 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0912 18:42:50.128178   25774 command_runner.go:130] > TasksMax=infinity
	I0912 18:42:50.128185   25774 command_runner.go:130] > TimeoutStartSec=0
	I0912 18:42:50.128197   25774 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0912 18:42:50.128208   25774 command_runner.go:130] > Delegate=yes
	I0912 18:42:50.128218   25774 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0912 18:42:50.128256   25774 command_runner.go:130] > KillMode=process
	I0912 18:42:50.128266   25774 command_runner.go:130] > [Install]
	I0912 18:42:50.128275   25774 command_runner.go:130] > WantedBy=multi-user.target
	I0912 18:42:50.128490   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:42:50.140278   25774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 18:42:50.156780   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:42:50.169134   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:42:50.180997   25774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:42:50.207570   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:42:50.221227   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:42:50.237822   25774 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0912 18:42:50.238226   25774 ssh_runner.go:195] Run: which cri-dockerd
	I0912 18:42:50.241514   25774 command_runner.go:130] > /usr/bin/cri-dockerd
	I0912 18:42:50.241908   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 18:42:50.250024   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 18:42:50.269261   25774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 18:42:50.375301   25774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 18:42:50.482348   25774 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 18:42:50.482378   25774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 18:42:50.499144   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:50.600957   25774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 18:42:52.035593   25774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4345964s)
	I0912 18:42:52.035674   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:42:52.134695   25774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 18:42:52.252441   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:42:52.363710   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:52.471147   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 18:42:52.484624   25774 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0912 18:42:52.486932   25774 out.go:177] 
	W0912 18:42:52.488525   25774 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0912 18:42:52.488540   25774 out.go:239] * 
	W0912 18:42:52.489285   25774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 18:42:52.491138   25774 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-12 18:41:37 UTC, ends at Tue 2023-09-12 18:42:53 UTC. --
	Sep 12 18:42:11 multinode-348977 dockerd[811]: time="2023-09-12T18:42:11.024190283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:11 multinode-348977 dockerd[811]: time="2023-09-12T18:42:11.024199800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:13 multinode-348977 cri-dockerd[1025]: time="2023-09-12T18:42:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3ec8e106fac65fe861579e38e010f99847ddca3d0f37581a183693b9bcd14d4/resolv.conf as [nameserver 192.168.122.1]"
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.447136452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.451788377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.451882752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.452030611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.061851006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.062195884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.062216451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.062282116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063143164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063368836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063418603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063433894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 cri-dockerd[1025]: time="2023-09-12T18:42:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9df497b51c1b90fe37ef8ae9f7ebb72d0151d57610fa49d4fe1fd419f9ce2ef4/resolv.conf as [nameserver 192.168.122.1]"
	Sep 12 18:42:25 multinode-348977 cri-dockerd[1025]: time="2023-09-12T18:42:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3688028545c3998c5f75e4d5e6621c4c5a0e73bbe71c9395d6387d5b29ed167d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.794695839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.794887847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.795127003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.814082052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925129152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925414025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925443070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925457752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	3d839fe423398       8c811b4aec35f                                                                                         28 seconds ago      Running             busybox                   1                   3688028545c39
	0fd05c38ac077       ead0a4a53df89                                                                                         28 seconds ago      Running             coredns                   1                   9df497b51c1b9
	c71d7c92a0630       c7d1297425461                                                                                         40 seconds ago      Running             kindnet-cni               1                   c3ec8e106fac6
	a2e119ff0bf65       6e38f40d628db                                                                                         43 seconds ago      Running             storage-provisioner       1                   25ec6eea906d3
	8983d7a34d7e1       6cdbabde3874e                                                                                         44 seconds ago      Running             kube-proxy                1                   2ab96fa55f0ef
	4b0a3970f77ff       b462ce0c8b1ff                                                                                         48 seconds ago      Running             kube-scheduler            1                   58ca84d5ee1f9
	d8d42361d6c78       821b3dfea27be                                                                                         49 seconds ago      Running             kube-controller-manager   1                   47e548c7595e8
	f7e6e4ccf8c6d       73deb9a3f7025                                                                                         49 seconds ago      Running             etcd                      1                   737cdef8c716b
	ea58445474f8a       5c801295c21d0                                                                                         49 seconds ago      Running             kube-apiserver            1                   009b38f39bf1d
	5d0e5a575e7a7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   57d559c84696d
	43aaf5c3bf6ed       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       0                   96a48d1e6808d
	012e610913534       ead0a4a53df89                                                                                         4 minutes ago       Exited              coredns                   0                   d9fcb5b501768
	5486463296b78       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Exited              kindnet-cni               0                   061d1cef513dc
	7791e737cea36       6cdbabde3874e                                                                                         4 minutes ago       Exited              kube-proxy                0                   1e31cfd643be5
	5253cfd31af01       b462ce0c8b1ff                                                                                         4 minutes ago       Exited              kube-scheduler            0                   14cac5d320ea7
	ff41c9b085ade       821b3dfea27be                                                                                         4 minutes ago       Exited              kube-controller-manager   0                   a0de152dc98db
	c0587efa38dbd       73deb9a3f7025                                                                                         4 minutes ago       Exited              etcd                      0                   e113d197f01ff
	3627cce96a103       5c801295c21d0                                                                                         4 minutes ago       Exited              kube-apiserver            0                   7fabc68ca2332
	
	* 
	* ==> coredns [012e61091353] <==
	* [INFO] 10.244.0.3:56404 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827513s
	[INFO] 10.244.0.3:39909 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058733s
	[INFO] 10.244.0.3:40512 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090095s
	[INFO] 10.244.0.3:60406 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000873796s
	[INFO] 10.244.0.3:57880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000029978s
	[INFO] 10.244.0.3:34549 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000022714s
	[INFO] 10.244.0.3:38341 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000026033s
	[INFO] 10.244.1.2:39841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228291s
	[INFO] 10.244.1.2:51650 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181493s
	[INFO] 10.244.1.2:51468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191351s
	[INFO] 10.244.1.2:42384 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131748s
	[INFO] 10.244.0.3:39782 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000285286s
	[INFO] 10.244.0.3:34979 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007029s
	[INFO] 10.244.0.3:33076 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041543s
	[INFO] 10.244.0.3:52995 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035864s
	[INFO] 10.244.1.2:51087 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160281s
	[INFO] 10.244.1.2:35395 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000230629s
	[INFO] 10.244.1.2:51952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000164424s
	[INFO] 10.244.1.2:41607 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178887s
	[INFO] 10.244.0.3:54371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158074s
	[INFO] 10.244.0.3:36708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133949s
	[INFO] 10.244.0.3:33324 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010055s
	[INFO] 10.244.0.3:60814 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006458s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [0fd05c38ac07] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47641 - 55285 "HINFO IN 6364648132792803096.1436698189111927659. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069508904s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-348977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fcf473f700c1ee60c8afd1005162a3d3f02aa75
	                    minikube.k8s.io/name=multinode-348977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T18_38_06_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 18:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348977
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 18:42:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:38:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:38:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:38:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:42:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    multinode-348977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 16345d08910a4f1386ba34dab54a7536
	  System UUID:                16345d08-910a-4f13-86ba-34dab54a7536
	  Boot ID:                    95f721a1-a2ae-4351-ac7c-daa64c367fe3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lzrq4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 coredns-5dd5756b68-bsdfd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m36s
	  kube-system                 etcd-multinode-348977                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m51s
	  kube-system                 kindnet-xs7zp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m36s
	  kube-system                 kube-apiserver-multinode-348977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-multinode-348977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-gp457                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-multinode-348977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m34s                  kube-proxy       
	  Normal  Starting                 42s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m57s)  kubelet          Node multinode-348977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m57s)  kubelet          Node multinode-348977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m57s)  kubelet          Node multinode-348977 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m48s                  kubelet          Node multinode-348977 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m48s                  kubelet          Node multinode-348977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s                  kubelet          Node multinode-348977 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m37s                  node-controller  Node multinode-348977 event: Registered Node multinode-348977 in Controller
	  Normal  NodeReady                4m24s                  kubelet          Node multinode-348977 status is now: NodeReady
	  Normal  Starting                 50s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)      kubelet          Node multinode-348977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)      kubelet          Node multinode-348977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)      kubelet          Node multinode-348977 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-348977 event: Registered Node multinode-348977 in Controller
	
	
	Name:               multinode-348977-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348977-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 18:39:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348977-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 18:40:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-348977-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e87658c5a254c91b54a24e8fcd285d9
	  System UUID:                3e87658c-5a25-4c91-b54a-24e8fcd285d9
	  Boot ID:                    d5722b21-541e-4bbb-875e-ed8e4fe5010b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-k9v4h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-rzmdg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m48s
	  kube-system                 kube-proxy-2wfpr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m48s (x5 over 3m50s)  kubelet          Node multinode-348977-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x5 over 3m50s)  kubelet          Node multinode-348977-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x5 over 3m50s)  kubelet          Node multinode-348977-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m47s                  node-controller  Node multinode-348977-m02 event: Registered Node multinode-348977-m02 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node multinode-348977-m02 status is now: NodeReady
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-348977-m02 event: Registered Node multinode-348977-m02 in Controller
	
	
	Name:               multinode-348977-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348977-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 18:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348977-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 18:40:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 18:40:54 +0000   Tue, 12 Sep 2023 18:40:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 18:40:54 +0000   Tue, 12 Sep 2023 18:40:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 18:40:54 +0000   Tue, 12 Sep 2023 18:40:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 18:40:54 +0000   Tue, 12 Sep 2023 18:40:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    multinode-348977-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0add64ef746445c18b49cf5267c4a795
	  System UUID:                0add64ef-7464-45c1-8b49-cf5267c4a795
	  Boot ID:                    d80a8cd0-89b5-411b-8aaf-f95e559356cc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vw7cg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-fvnqz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  Starting                 2m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m55s)  kubelet          Node multinode-348977-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m55s)  kubelet          Node multinode-348977-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m55s)  kubelet          Node multinode-348977-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                2m42s                  kubelet          Node multinode-348977-m03 status is now: NodeReady
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x2 over 2m7s)    kubelet          Node multinode-348977-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x2 over 2m7s)    kubelet          Node multinode-348977-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x2 over 2m7s)    kubelet          Node multinode-348977-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                119s                   kubelet          Node multinode-348977-m03 status is now: NodeReady
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-348977-m03 event: Registered Node multinode-348977-m03 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep12 18:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070750] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.292632] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.291624] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133732] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.499881] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.983408] systemd-fstab-generator[496]: Ignoring "noauto" for root device
	[  +0.109705] systemd-fstab-generator[507]: Ignoring "noauto" for root device
	[  +1.130579] systemd-fstab-generator[734]: Ignoring "noauto" for root device
	[  +0.275077] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +0.107373] systemd-fstab-generator[783]: Ignoring "noauto" for root device
	[  +0.118134] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +1.545086] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.114032] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.102407] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.102958] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.126032] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[Sep12 18:42] systemd-fstab-generator[1261]: Ignoring "noauto" for root device
	[  +0.393432] kauditd_printk_skb: 67 callbacks suppressed
	[ +17.604251] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [c0587efa38db] <==
	* {"level":"info","ts":"2023-09-12T18:37:59.755156Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T18:37:59.755342Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T18:37:59.756647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T18:37:59.764436Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T18:37:59.767946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T18:39:03.737137Z","caller":"traceutil/trace.go:171","msg":"trace[1521857685] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"173.2868ms","start":"2023-09-12T18:39:03.563812Z","end":"2023-09-12T18:39:03.737099Z","steps":["trace[1521857685] 'process raft request'  (duration: 110.524525ms)","trace[1521857685] 'compare'  (duration: 62.397452ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:39:04.080004Z","caller":"traceutil/trace.go:171","msg":"trace[1993585711] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"336.233237ms","start":"2023-09-12T18:39:03.743757Z","end":"2023-09-12T18:39:04.07999Z","steps":["trace[1993585711] 'process raft request'  (duration: 272.765002ms)","trace[1993585711] 'compare'  (duration: 63.297167ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T18:39:04.081198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-12T18:39:03.743742Z","time spent":"336.779834ms","remote":"127.0.0.1:53564","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2350,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-7q85d\" mod_revision:479 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-7q85d\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-7q85d\" > >"}
	{"level":"info","ts":"2023-09-12T18:39:04.08013Z","caller":"traceutil/trace.go:171","msg":"trace[852958403] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"260.097444ms","start":"2023-09-12T18:39:03.820021Z","end":"2023-09-12T18:39:04.080119Z","steps":["trace[852958403] 'process raft request'  (duration: 259.900086ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:39:59.246506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.898009ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6276762833927192222 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-348977-m03.17843ac9faa7f269\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-348977-m03.17843ac9faa7f269\" value_size:642 lease:6276762833927191776 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-09-12T18:39:59.246917Z","caller":"traceutil/trace.go:171","msg":"trace[1350052669] linearizableReadLoop","detail":"{readStateIndex:649; appliedIndex:646; }","duration":"219.29457ms","start":"2023-09-12T18:39:59.027585Z","end":"2023-09-12T18:39:59.24688Z","steps":["trace[1350052669] 'read index received'  (duration: 21.629106ms)","trace[1350052669] 'applied index is now lower than readState.Index'  (duration: 197.66468ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:39:59.247057Z","caller":"traceutil/trace.go:171","msg":"trace[1628836798] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"270.154541ms","start":"2023-09-12T18:39:58.976886Z","end":"2023-09-12T18:39:59.247041Z","steps":["trace[1628836798] 'process raft request'  (duration: 72.318815ms)","trace[1628836798] 'compare'  (duration: 196.668625ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:39:59.247334Z","caller":"traceutil/trace.go:171","msg":"trace[1411681711] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"223.684554ms","start":"2023-09-12T18:39:59.02364Z","end":"2023-09-12T18:39:59.247324Z","steps":["trace[1411681711] 'process raft request'  (duration: 223.155933ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:39:59.250707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.171347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-348977-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-12T18:39:59.251217Z","caller":"traceutil/trace.go:171","msg":"trace[1151974734] range","detail":"{range_begin:/registry/csinodes/multinode-348977-m03; range_end:; response_count:0; response_revision:612; }","duration":"223.687646ms","start":"2023-09-12T18:39:59.027508Z","end":"2023-09-12T18:39:59.251196Z","steps":["trace[1151974734] 'agreement among raft nodes before linearized reading'  (duration: 223.094864ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-12T18:40:58.411708Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-12T18:40:58.411837Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-348977","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2023-09-12T18:40:58.412086Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T18:40:58.412209Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T18:40:58.456342Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T18:40:58.456474Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-12T18:40:58.456764Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2023-09-12T18:40:58.460525Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:40:58.460668Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:40:58.460688Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-348977","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	* 
	* ==> etcd [f7e6e4ccf8c6] <==
	* {"level":"info","ts":"2023-09-12T18:42:05.634482Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2023-09-12T18:42:05.63468Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T18:42:05.634749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T18:42:05.639015Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-12T18:42:05.639265Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T18:42:05.639317Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-12T18:42:05.639717Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T18:42:05.639901Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T18:42:05.639909Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T18:42:05.640234Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:42:05.640271Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:42:06.608994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-12T18:42:06.609059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-12T18:42:06.609075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2023-09-12T18:42:06.609087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.609092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.609186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.609196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.611463Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:multinode-348977 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T18:42:06.611714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T18:42:06.613046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T18:42:06.613138Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T18:42:06.61399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T18:42:06.614123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T18:42:06.614836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	
	* 
	* ==> kernel <==
	*  18:42:53 up 1 min,  0 users,  load average: 0.48, 0.18, 0.06
	Linux multinode-348977 5.10.57 #1 SMP Thu Sep 7 15:04:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [5486463296b7] <==
	* I0912 18:40:16.108139       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:16.108176       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.2.0/24] 
	I0912 18:40:26.179114       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:26.179155       1 main.go:227] handling current node
	I0912 18:40:26.179168       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:26.179183       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:26.179528       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:26.179545       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.2.0/24] 
	I0912 18:40:36.192286       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:36.192332       1 main.go:227] handling current node
	I0912 18:40:36.192342       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:36.192348       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:36.192561       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:36.192570       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.2.0/24] 
	I0912 18:40:46.206291       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:46.206722       1 main.go:227] handling current node
	I0912 18:40:46.206890       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:46.206975       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:56.221569       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:56.221992       1 main.go:227] handling current node
	I0912 18:40:56.222238       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:56.222363       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:56.222810       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:56.222867       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:40:56.223015       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.76 Flags: [] Table: 0} 
	
	* 
	* ==> kindnet [c71d7c92a063] <==
	* I0912 18:42:14.547011       1 main.go:227] handling current node
	I0912 18:42:14.547143       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:14.547179       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:14.547302       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.55 Flags: [] Table: 0} 
	I0912 18:42:14.547386       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:14.547420       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:42:14.547468       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.76 Flags: [] Table: 0} 
	I0912 18:42:24.560514       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:24.560568       1 main.go:227] handling current node
	I0912 18:42:24.560584       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:24.560590       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:24.560688       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:24.560692       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:42:34.572368       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:34.572426       1 main.go:227] handling current node
	I0912 18:42:34.572443       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:34.572450       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:34.572739       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:34.572749       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:42:44.585177       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:44.585239       1 main.go:227] handling current node
	I0912 18:42:44.585255       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:44.585264       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:44.595162       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:44.595221       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [3627cce96a10] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 18:41:08.216377       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 18:41:08.304609       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 18:41:08.374873       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [ea58445474f8] <==
	* I0912 18:42:08.024194       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0912 18:42:08.024731       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0912 18:42:08.024860       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0912 18:42:08.086794       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 18:42:08.125110       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0912 18:42:08.125874       1 aggregator.go:166] initial CRD sync complete...
	I0912 18:42:08.125917       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 18:42:08.125979       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 18:42:08.125987       1 cache.go:39] Caches are synced for autoregister controller
	I0912 18:42:08.165078       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 18:42:08.165739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 18:42:08.168667       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0912 18:42:08.170197       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0912 18:42:08.170238       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0912 18:42:08.171808       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0912 18:42:08.172547       1 shared_informer.go:318] Caches are synced for configmaps
	I0912 18:42:08.991751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0912 18:42:09.405255       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.209]
	I0912 18:42:09.407042       1 controller.go:624] quota admission added evaluator for: endpoints
	I0912 18:42:09.413668       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 18:42:11.068168       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0912 18:42:11.342916       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 18:42:11.354669       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 18:42:11.433098       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 18:42:11.443027       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [d8d42361d6c7] <==
	* I0912 18:42:20.628229       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0912 18:42:20.628530       1 shared_informer.go:318] Caches are synced for PV protection
	I0912 18:42:20.629915       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0912 18:42:20.631450       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0912 18:42:20.632435       1 shared_informer.go:318] Caches are synced for persistent volume
	I0912 18:42:20.632450       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0912 18:42:20.636531       1 shared_informer.go:318] Caches are synced for ephemeral
	I0912 18:42:20.707628       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0912 18:42:20.719689       1 shared_informer.go:318] Caches are synced for disruption
	I0912 18:42:20.729750       1 shared_informer.go:318] Caches are synced for deployment
	I0912 18:42:20.735143       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 18:42:20.783145       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0912 18:42:20.802339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.506268ms"
	I0912 18:42:20.802460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.515µs"
	I0912 18:42:20.804054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.54866ms"
	I0912 18:42:20.805308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.671µs"
	I0912 18:42:20.817385       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 18:42:21.179187       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 18:42:21.190568       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 18:42:21.190666       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0912 18:42:26.981846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.908539ms"
	I0912 18:42:26.982359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="259.719µs"
	I0912 18:42:27.008429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.092µs"
	I0912 18:42:27.046176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.943866ms"
	I0912 18:42:27.054742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="374.084µs"
	
	* 
	* ==> kube-controller-manager [ff41c9b085ad] <==
	* I0912 18:39:20.441280       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:39:23.006486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0912 18:39:23.025222       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-k9v4h"
	I0912 18:39:23.044852       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lzrq4"
	I0912 18:39:23.064705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.136161ms"
	I0912 18:39:23.087255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.383929ms"
	I0912 18:39:23.113908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.573207ms"
	I0912 18:39:23.114329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.008µs"
	I0912 18:39:24.882805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.533382ms"
	I0912 18:39:24.883305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.901µs"
	I0912 18:39:25.065646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.568569ms"
	I0912 18:39:25.065989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.382µs"
	I0912 18:39:59.249887       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348977-m03\" does not exist"
	I0912 18:39:59.249943       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:39:59.291166       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vw7cg"
	I0912 18:39:59.301329       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-348977-m03" podCIDRs=["10.244.2.0/24"]
	I0912 18:39:59.301879       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fvnqz"
	I0912 18:40:01.820031       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-348977-m03"
	I0912 18:40:01.820299       1 event.go:307] "Event occurred" object="multinode-348977-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-348977-m03 event: Registered Node multinode-348977-m03 in Controller"
	I0912 18:40:11.668310       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:40:45.930947       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:40:46.792815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348977-m03\" does not exist"
	I0912 18:40:46.793262       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:40:46.803362       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-348977-m03" podCIDRs=["10.244.3.0/24"]
	I0912 18:40:55.004052       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	
	* 
	* ==> kube-proxy [7791e737cea3] <==
	* I0912 18:38:18.983165       1 server_others.go:69] "Using iptables proxy"
	I0912 18:38:19.001200       1 node.go:141] Successfully retrieved node IP: 192.168.39.209
	I0912 18:38:19.068176       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 18:38:19.068229       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 18:38:19.071127       1 server_others.go:152] "Using iptables Proxier"
	I0912 18:38:19.071197       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 18:38:19.071594       1 server.go:846] "Version info" version="v1.28.1"
	I0912 18:38:19.071606       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:38:19.072515       1 config.go:188] "Starting service config controller"
	I0912 18:38:19.072564       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 18:38:19.072591       1 config.go:97] "Starting endpoint slice config controller"
	I0912 18:38:19.072595       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 18:38:19.073105       1 config.go:315] "Starting node config controller"
	I0912 18:38:19.073145       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 18:38:19.174208       1 shared_informer.go:318] Caches are synced for service config
	I0912 18:38:19.174214       1 shared_informer.go:318] Caches are synced for node config
	I0912 18:38:19.174326       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [8983d7a34d7e] <==
	* I0912 18:42:10.810452       1 server_others.go:69] "Using iptables proxy"
	I0912 18:42:10.831289       1 node.go:141] Successfully retrieved node IP: 192.168.39.209
	I0912 18:42:10.920576       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 18:42:10.920631       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 18:42:10.923798       1 server_others.go:152] "Using iptables Proxier"
	I0912 18:42:10.923877       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 18:42:10.926195       1 server.go:846] "Version info" version="v1.28.1"
	I0912 18:42:10.926239       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:42:10.927551       1 config.go:188] "Starting service config controller"
	I0912 18:42:10.927760       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 18:42:10.927788       1 config.go:97] "Starting endpoint slice config controller"
	I0912 18:42:10.927793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 18:42:10.929530       1 config.go:315] "Starting node config controller"
	I0912 18:42:10.929541       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 18:42:11.029917       1 shared_informer.go:318] Caches are synced for node config
	I0912 18:42:11.030034       1 shared_informer.go:318] Caches are synced for service config
	I0912 18:42:11.030058       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [4b0a3970f77f] <==
	* I0912 18:42:06.171781       1 serving.go:348] Generated self-signed cert in-memory
	W0912 18:42:08.058711       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 18:42:08.058759       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 18:42:08.058777       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 18:42:08.058783       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 18:42:08.097758       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0912 18:42:08.097814       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:42:08.101187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 18:42:08.101345       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0912 18:42:08.101692       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 18:42:08.102315       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0912 18:42:08.202244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [5253cfd31af0] <==
	* W0912 18:38:02.049647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 18:38:02.049757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 18:38:02.049872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 18:38:02.049883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 18:38:02.057213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 18:38:02.057263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 18:38:02.894816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 18:38:02.894869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 18:38:02.918705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 18:38:02.918805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 18:38:03.008018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 18:38:03.008070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 18:38:03.013545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 18:38:03.013592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 18:38:03.055628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 18:38:03.055654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 18:38:03.091601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 18:38:03.091889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0912 18:38:03.315538       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 18:38:03.315564       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0912 18:38:06.105760       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 18:40:58.389016       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0912 18:40:58.389137       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0912 18:40:58.390065       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0912 18:40:58.390365       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-12 18:41:37 UTC, ends at Tue 2023-09-12 18:42:54 UTC. --
	Sep 12 18:42:09 multinode-348977 kubelet[1267]: I0912 18:42:09.823198    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab96fa55f0ef28f704c9d1745b1c48a4be93c094ea1cf741253b69536744b49"
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.709075    1267 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711261    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume podName:b14b1b22-9cc1-44da-bab6-32ec6c417f9a nodeName:}" failed. No retries permitted until 2023-09-12 18:42:12.711142949 +0000 UTC m=+9.960675589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume") pod "coredns-5dd5756b68-bsdfd" (UID: "b14b1b22-9cc1-44da-bab6-32ec6c417f9a") : object "kube-system"/"coredns" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711708    1267 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711731    1267 projected.go:198] Error preparing data for projected volume kube-api-access-fth6t for pod default/busybox-5bc68d56bd-lzrq4: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711773    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t podName:e821b198-38ff-4455-9acb-74f6774ee805 nodeName:}" failed. No retries permitted until 2023-09-12 18:42:12.71176182 +0000 UTC m=+9.961294459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fth6t" (UniqueName: "kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t") pod "busybox-5bc68d56bd-lzrq4" (UID: "e821b198-38ff-4455-9acb-74f6774ee805") : object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727304    1267 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727362    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume podName:b14b1b22-9cc1-44da-bab6-32ec6c417f9a nodeName:}" failed. No retries permitted until 2023-09-12 18:42:16.727348407 +0000 UTC m=+13.976881032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume") pod "coredns-5dd5756b68-bsdfd" (UID: "b14b1b22-9cc1-44da-bab6-32ec6c417f9a") : object "kube-system"/"coredns" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727689    1267 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727709    1267 projected.go:198] Error preparing data for projected volume kube-api-access-fth6t for pod default/busybox-5bc68d56bd-lzrq4: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727755    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t podName:e821b198-38ff-4455-9acb-74f6774ee805 nodeName:}" failed. No retries permitted until 2023-09-12 18:42:16.727741796 +0000 UTC m=+13.977274421 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fth6t" (UniqueName: "kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t") pod "busybox-5bc68d56bd-lzrq4" (UID: "e821b198-38ff-4455-9acb-74f6774ee805") : object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: I0912 18:42:13.310253    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ec8e106fac65fe861579e38e010f99847ddca3d0f37581a183693b9bcd14d4"
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: I0912 18:42:13.348176    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ec6eea906d3f346783e33b09260ff2422a9e5aa9b4883c67c9173939765553"
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: E0912 18:42:13.348197    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-bsdfd" podUID="b14b1b22-9cc1-44da-bab6-32ec6c417f9a"
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: E0912 18:42:13.351638    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-lzrq4" podUID="e821b198-38ff-4455-9acb-74f6774ee805"
	Sep 12 18:42:15 multinode-348977 kubelet[1267]: E0912 18:42:15.083415    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-bsdfd" podUID="b14b1b22-9cc1-44da-bab6-32ec6c417f9a"
	Sep 12 18:42:15 multinode-348977 kubelet[1267]: E0912 18:42:15.083593    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-lzrq4" podUID="e821b198-38ff-4455-9acb-74f6774ee805"
	Sep 12 18:42:15 multinode-348977 kubelet[1267]: I0912 18:42:15.420450    1267 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.762853    1267 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.763020    1267 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.763745    1267 projected.go:198] Error preparing data for projected volume kube-api-access-fth6t for pod default/busybox-5bc68d56bd-lzrq4: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.763650    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume podName:b14b1b22-9cc1-44da-bab6-32ec6c417f9a nodeName:}" failed. No retries permitted until 2023-09-12 18:42:24.763630726 +0000 UTC m=+22.013163364 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume") pod "coredns-5dd5756b68-bsdfd" (UID: "b14b1b22-9cc1-44da-bab6-32ec6c417f9a") : object "kube-system"/"coredns" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.764171    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t podName:e821b198-38ff-4455-9acb-74f6774ee805 nodeName:}" failed. No retries permitted until 2023-09-12 18:42:24.764155206 +0000 UTC m=+22.013687834 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fth6t" (UniqueName: "kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t") pod "busybox-5bc68d56bd-lzrq4" (UID: "e821b198-38ff-4455-9acb-74f6774ee805") : object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:25 multinode-348977 kubelet[1267]: I0912 18:42:25.769672    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3688028545c3998c5f75e4d5e6621c4c5a0e73bbe71c9395d6387d5b29ed167d"
	Sep 12 18:42:25 multinode-348977 kubelet[1267]: I0912 18:42:25.936560    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df497b51c1b90fe37ef8ae9f7ebb72d0151d57610fa49d4fe1fd419f9ce2ef4"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-348977 -n multinode-348977
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-348977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (117.00s)

TestMultiNode/serial/DeleteNode (3.08s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 node delete m03
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr: exit status 2 (410.22072ms)

-- stdout --
	multinode-348977
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-348977-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 18:42:55.498443   26335 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:42:55.498565   26335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:42:55.498574   26335 out.go:309] Setting ErrFile to fd 2...
	I0912 18:42:55.498578   26335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:42:55.498799   26335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:42:55.498975   26335 out.go:303] Setting JSON to false
	I0912 18:42:55.499010   26335 mustload.go:65] Loading cluster: multinode-348977
	I0912 18:42:55.499111   26335 notify.go:220] Checking for updates...
	I0912 18:42:55.499515   26335 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:55.499537   26335 status.go:255] checking status of multinode-348977 ...
	I0912 18:42:55.499940   26335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:55.499993   26335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:55.521323   26335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0912 18:42:55.521762   26335 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:55.522275   26335 main.go:141] libmachine: Using API Version  1
	I0912 18:42:55.522302   26335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:55.522643   26335 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:55.522823   26335 main.go:141] libmachine: (multinode-348977) Calling .GetState
	I0912 18:42:55.524513   26335 status.go:330] multinode-348977 host status = "Running" (err=<nil>)
	I0912 18:42:55.524529   26335 host.go:66] Checking if "multinode-348977" exists ...
	I0912 18:42:55.524801   26335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:55.524842   26335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:55.539278   26335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0912 18:42:55.539657   26335 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:55.540067   26335 main.go:141] libmachine: Using API Version  1
	I0912 18:42:55.540088   26335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:55.540428   26335 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:55.540628   26335 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:42:55.543326   26335 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:42:55.543780   26335 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:42:55.543833   26335 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:42:55.543939   26335 host.go:66] Checking if "multinode-348977" exists ...
	I0912 18:42:55.544363   26335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:55.544414   26335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:55.558681   26335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0912 18:42:55.559151   26335 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:55.559691   26335 main.go:141] libmachine: Using API Version  1
	I0912 18:42:55.559716   26335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:55.560096   26335 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:55.560309   26335 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:42:55.560495   26335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 18:42:55.560541   26335 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:42:55.563354   26335 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:42:55.563830   26335 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:42:55.563862   26335 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:42:55.563996   26335 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:42:55.564162   26335 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:42:55.564340   26335 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:42:55.564469   26335 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:42:55.654294   26335 ssh_runner.go:195] Run: systemctl --version
	I0912 18:42:55.660294   26335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:55.679460   26335 kubeconfig.go:92] found "multinode-348977" server: "https://192.168.39.209:8443"
	I0912 18:42:55.679490   26335 api_server.go:166] Checking apiserver status ...
	I0912 18:42:55.679520   26335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:55.692923   26335 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1613/cgroup
	I0912 18:42:55.701860   26335 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod4abe28b137e1ba2381404609e97bb3f7/ea58445474f8a1705008dff4d5491c0cd0e36e9180ece090203924884ce91f4a"
	I0912 18:42:55.701913   26335 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4abe28b137e1ba2381404609e97bb3f7/ea58445474f8a1705008dff4d5491c0cd0e36e9180ece090203924884ce91f4a/freezer.state
	I0912 18:42:55.713404   26335 api_server.go:204] freezer state: "THAWED"
	I0912 18:42:55.713436   26335 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:55.719416   26335 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:55.719433   26335 status.go:421] multinode-348977 apiserver status = Running (err=<nil>)
	I0912 18:42:55.719441   26335 status.go:257] multinode-348977 status: &{Name:multinode-348977 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:42:55.719461   26335 status.go:255] checking status of multinode-348977-m02 ...
	I0912 18:42:55.719747   26335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:55.719783   26335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:55.734287   26335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0912 18:42:55.734699   26335 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:55.735144   26335 main.go:141] libmachine: Using API Version  1
	I0912 18:42:55.735171   26335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:55.735475   26335 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:55.735636   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .GetState
	I0912 18:42:55.737079   26335 status.go:330] multinode-348977-m02 host status = "Running" (err=<nil>)
	I0912 18:42:55.737101   26335 host.go:66] Checking if "multinode-348977-m02" exists ...
	I0912 18:42:55.737374   26335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:55.737405   26335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:55.751426   26335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0912 18:42:55.751764   26335 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:55.752185   26335 main.go:141] libmachine: Using API Version  1
	I0912 18:42:55.752205   26335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:55.752451   26335 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:55.752636   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:55.755492   26335 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:55.755834   26335 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:55.755865   26335 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:55.756002   26335 host.go:66] Checking if "multinode-348977-m02" exists ...
	I0912 18:42:55.756283   26335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:55.756334   26335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:55.770392   26335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0912 18:42:55.770801   26335 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:55.771181   26335 main.go:141] libmachine: Using API Version  1
	I0912 18:42:55.771201   26335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:55.771498   26335 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:55.771680   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:55.771898   26335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 18:42:55.771919   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:55.774334   26335 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:55.774724   26335 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:55.774756   26335 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:55.774882   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:55.775058   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:55.775207   26335 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:55.775340   26335 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:55.858064   26335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:55.870759   26335 status.go:257] multinode-348977-m02 status: &{Name:multinode-348977-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-348977 -n multinode-348977
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-348977 logs -n 25: (1.178482021s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977:/home/docker/cp-test_multinode-348977-m02_multinode-348977.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977 sudo cat                                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m02_multinode-348977.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03:/home/docker/cp-test_multinode-348977-m02_multinode-348977-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977-m03 sudo cat                                   | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m02_multinode-348977-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp testdata/cp-test.txt                                                | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977:/home/docker/cp-test_multinode-348977-m03_multinode-348977.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977 sudo cat                                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m03_multinode-348977.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt                       | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m02:/home/docker/cp-test_multinode-348977-m03_multinode-348977-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n                                                                 | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | multinode-348977-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-348977 ssh -n multinode-348977-m02 sudo cat                                   | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | /home/docker/cp-test_multinode-348977-m03_multinode-348977-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-348977 node stop m03                                                          | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	| node    | multinode-348977 node start                                                             | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:40 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-348977                                                                | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC |                     |
	| stop    | -p multinode-348977                                                                     | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:40 UTC | 12 Sep 23 18:41 UTC |
	| start   | -p multinode-348977                                                                     | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:41 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-348977                                                                | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:42 UTC |                     |
	| node    | multinode-348977 node delete                                                            | multinode-348977 | jenkins | v1.31.2 | 12 Sep 23 18:42 UTC | 12 Sep 23 18:42 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:41:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:41:25.667613   25774 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:41:25.667734   25774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:41:25.667744   25774 out.go:309] Setting ErrFile to fd 2...
	I0912 18:41:25.667751   25774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:41:25.667992   25774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:41:25.668537   25774 out.go:303] Setting JSON to false
	I0912 18:41:25.675371   25774 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1436,"bootTime":1694542650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:41:25.675445   25774 start.go:138] virtualization: kvm guest
	I0912 18:41:25.677679   25774 out.go:177] * [multinode-348977] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:41:25.679064   25774 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:41:25.679068   25774 notify.go:220] Checking for updates...
	I0912 18:41:25.680532   25774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:41:25.681821   25774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:25.683123   25774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:41:25.684315   25774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:41:25.685748   25774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:41:25.687862   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:41:25.687948   25774 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:41:25.688376   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:41:25.688437   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:41:25.702321   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0912 18:41:25.702721   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:41:25.703222   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:41:25.703249   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:41:25.703593   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:41:25.703770   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:25.738031   25774 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 18:41:25.739353   25774 start.go:298] selected driver: kvm2
	I0912 18:41:25.739367   25774 start.go:902] validating driver "kvm2" against &{Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:41:25.739535   25774 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:41:25.739863   25774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:41:25.739952   25774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3674/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:41:25.754342   25774 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:41:25.755022   25774 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 18:41:25.755067   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:41:25.755081   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:41:25.755090   25774 start_flags.go:321] config:
	{Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0912 18:41:25.755284   25774 iso.go:125] acquiring lock: {Name:mk43b7bcf1553c61ec6315fe7159639653246bdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:41:25.757119   25774 out.go:177] * Starting control plane node multinode-348977 in cluster multinode-348977
	I0912 18:41:25.758385   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:41:25.758412   25774 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0912 18:41:25.758419   25774 cache.go:57] Caching tarball of preloaded images
	I0912 18:41:25.758521   25774 preload.go:174] Found /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:41:25.758535   25774 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:41:25.758693   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:41:25.758878   25774 start.go:365] acquiring machines lock for multinode-348977: {Name:mkb814e9f5e9709f943ea910e0cc7d91215dc74f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:41:25.758921   25774 start.go:369] acquired machines lock for "multinode-348977" in 23.43µs
	I0912 18:41:25.758937   25774 start.go:96] Skipping create...Using existing machine configuration
	I0912 18:41:25.758946   25774 fix.go:54] fixHost starting: 
	I0912 18:41:25.759194   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:41:25.759230   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:41:25.772820   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0912 18:41:25.773260   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:41:25.773721   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:41:25.773744   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:41:25.774050   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:41:25.774207   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:25.774351   25774 main.go:141] libmachine: (multinode-348977) Calling .GetState
	I0912 18:41:25.776006   25774 fix.go:102] recreateIfNeeded on multinode-348977: state=Stopped err=<nil>
	I0912 18:41:25.776027   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	W0912 18:41:25.776184   25774 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 18:41:25.778169   25774 out.go:177] * Restarting existing kvm2 VM for "multinode-348977" ...
	I0912 18:41:25.779423   25774 main.go:141] libmachine: (multinode-348977) Calling .Start
	I0912 18:41:25.779594   25774 main.go:141] libmachine: (multinode-348977) Ensuring networks are active...
	I0912 18:41:25.780345   25774 main.go:141] libmachine: (multinode-348977) Ensuring network default is active
	I0912 18:41:25.780685   25774 main.go:141] libmachine: (multinode-348977) Ensuring network mk-multinode-348977 is active
	I0912 18:41:25.780989   25774 main.go:141] libmachine: (multinode-348977) Getting domain xml...
	I0912 18:41:25.781706   25774 main.go:141] libmachine: (multinode-348977) Creating domain...
	I0912 18:41:26.979765   25774 main.go:141] libmachine: (multinode-348977) Waiting to get IP...
	I0912 18:41:26.980558   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:26.980870   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:26.980946   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:26.980855   25804 retry.go:31] will retry after 279.689815ms: waiting for machine to come up
	I0912 18:41:27.262432   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.262870   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.262898   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.262825   25804 retry.go:31] will retry after 258.456262ms: waiting for machine to come up
	I0912 18:41:27.523376   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.523770   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.523792   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.523714   25804 retry.go:31] will retry after 470.938004ms: waiting for machine to come up
	I0912 18:41:27.996320   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:27.996767   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:27.996795   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:27.996720   25804 retry.go:31] will retry after 597.246886ms: waiting for machine to come up
	I0912 18:41:28.595108   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:28.595555   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:28.595588   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:28.595492   25804 retry.go:31] will retry after 568.569691ms: waiting for machine to come up
	I0912 18:41:29.165136   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:29.165526   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:29.165568   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:29.165431   25804 retry.go:31] will retry after 758.578505ms: waiting for machine to come up
	I0912 18:41:29.925242   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:29.925603   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:29.925635   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:29.925546   25804 retry.go:31] will retry after 859.704183ms: waiting for machine to come up
	I0912 18:41:30.786642   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:30.786967   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:30.787004   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:30.786922   25804 retry.go:31] will retry after 1.183485789s: waiting for machine to come up
	I0912 18:41:31.972095   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:31.972538   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:31.972559   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:31.972512   25804 retry.go:31] will retry after 1.429607271s: waiting for machine to come up
	I0912 18:41:33.403618   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:33.403985   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:33.404016   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:33.403933   25804 retry.go:31] will retry after 1.93373353s: waiting for machine to come up
	I0912 18:41:35.340062   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:35.340437   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:35.340468   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:35.340385   25804 retry.go:31] will retry after 2.736938727s: waiting for machine to come up
	I0912 18:41:38.080033   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:38.080374   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:38.080419   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:38.080335   25804 retry.go:31] will retry after 3.047877472s: waiting for machine to come up
	I0912 18:41:41.129305   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:41.129731   25774 main.go:141] libmachine: (multinode-348977) DBG | unable to find current IP address of domain multinode-348977 in network mk-multinode-348977
	I0912 18:41:41.129764   25774 main.go:141] libmachine: (multinode-348977) DBG | I0912 18:41:41.129706   25804 retry.go:31] will retry after 4.362757487s: waiting for machine to come up
	I0912 18:41:45.497217   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.497673   25774 main.go:141] libmachine: (multinode-348977) Found IP for machine: 192.168.39.209
	I0912 18:41:45.497700   25774 main.go:141] libmachine: (multinode-348977) Reserving static IP address...
	I0912 18:41:45.497715   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has current primary IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.498115   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "multinode-348977", mac: "52:54:00:38:2d:65", ip: "192.168.39.209"} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.498148   25774 main.go:141] libmachine: (multinode-348977) DBG | skip adding static IP to network mk-multinode-348977 - found existing host DHCP lease matching {name: "multinode-348977", mac: "52:54:00:38:2d:65", ip: "192.168.39.209"}
	I0912 18:41:45.498157   25774 main.go:141] libmachine: (multinode-348977) Reserved static IP address: 192.168.39.209
	I0912 18:41:45.498169   25774 main.go:141] libmachine: (multinode-348977) Waiting for SSH to be available...
	I0912 18:41:45.498196   25774 main.go:141] libmachine: (multinode-348977) DBG | Getting to WaitForSSH function...
	I0912 18:41:45.500347   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.500695   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.500725   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.500838   25774 main.go:141] libmachine: (multinode-348977) DBG | Using SSH client type: external
	I0912 18:41:45.500863   25774 main.go:141] libmachine: (multinode-348977) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa (-rw-------)
	I0912 18:41:45.500903   25774 main.go:141] libmachine: (multinode-348977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:41:45.500923   25774 main.go:141] libmachine: (multinode-348977) DBG | About to run SSH command:
	I0912 18:41:45.500939   25774 main.go:141] libmachine: (multinode-348977) DBG | exit 0
	I0912 18:41:45.586419   25774 main.go:141] libmachine: (multinode-348977) DBG | SSH cmd err, output: <nil>: 
	I0912 18:41:45.586834   25774 main.go:141] libmachine: (multinode-348977) Calling .GetConfigRaw
	I0912 18:41:45.587556   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:45.589868   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.590371   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.590417   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.590668   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:41:45.590853   25774 machine.go:88] provisioning docker machine ...
	I0912 18:41:45.590870   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:45.591068   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.591255   25774 buildroot.go:166] provisioning hostname "multinode-348977"
	I0912 18:41:45.591275   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.591470   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.593702   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.594074   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.594103   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.594218   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.594383   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.594509   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.594632   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.594781   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:45.595274   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:45.595295   25774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348977 && echo "multinode-348977" | sudo tee /etc/hostname
	I0912 18:41:45.722017   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348977
	
	I0912 18:41:45.722046   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.724726   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.725094   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.725130   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.725251   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.725458   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.725619   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.725761   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.725916   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:45.726274   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:45.726292   25774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348977/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:41:45.842816   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:41:45.842841   25774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3674/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3674/.minikube}
	I0912 18:41:45.842857   25774 buildroot.go:174] setting up certificates
	I0912 18:41:45.842865   25774 provision.go:83] configureAuth start
	I0912 18:41:45.842874   25774 main.go:141] libmachine: (multinode-348977) Calling .GetMachineName
	I0912 18:41:45.843162   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:45.845880   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.846268   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.846304   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.846423   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.848394   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.848724   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.848757   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.848868   25774 provision.go:138] copyHostCerts
	I0912 18:41:45.848897   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:41:45.848925   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem, removing ...
	I0912 18:41:45.848930   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:41:45.848994   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem (1078 bytes)
	I0912 18:41:45.849111   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:41:45.849132   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem, removing ...
	I0912 18:41:45.849136   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:41:45.849173   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem (1123 bytes)
	I0912 18:41:45.849235   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:41:45.849258   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem, removing ...
	I0912 18:41:45.849267   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:41:45.849293   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem (1675 bytes)
	I0912 18:41:45.849363   25774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem org=jenkins.multinode-348977 san=[192.168.39.209 192.168.39.209 localhost 127.0.0.1 minikube multinode-348977]
	I0912 18:41:45.937349   25774 provision.go:172] copyRemoteCerts
	I0912 18:41:45.937398   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:41:45.937443   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:45.940144   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.940452   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:45.940478   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:45.940646   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:45.940826   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:45.941012   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:45.941161   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:46.028317   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 18:41:46.028387   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 18:41:46.051259   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 18:41:46.051345   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 18:41:46.073514   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 18:41:46.073587   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 18:41:46.094769   25774 provision.go:86] duration metric: configureAuth took 251.89397ms
	I0912 18:41:46.094791   25774 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:41:46.095009   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:41:46.095035   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:46.095303   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.097707   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.098061   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.098087   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.098202   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.098375   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.098520   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.098678   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.098851   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.099151   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.099166   25774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 18:41:46.212162   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 18:41:46.212182   25774 buildroot.go:70] root file system type: tmpfs
	I0912 18:41:46.212298   25774 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 18:41:46.212318   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.214891   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.215233   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.215263   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.215455   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.215642   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.215791   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.215920   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.216075   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.216522   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.216627   25774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 18:41:46.339328   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 18:41:46.339371   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:46.341974   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.342333   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:46.342373   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:46.342551   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:46.342746   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.342899   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:46.343025   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:46.343217   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:46.343656   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:46.343688   25774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 18:41:47.205623   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 18:41:47.205652   25774 machine.go:91] provisioned docker machine in 1.61478511s
	I0912 18:41:47.205663   25774 start.go:300] post-start starting for "multinode-348977" (driver="kvm2")
	I0912 18:41:47.205676   25774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:41:47.205694   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.205995   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:41:47.206022   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.208743   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.209079   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.209103   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.209248   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.209422   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.209594   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.209743   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.295876   25774 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:41:47.299689   25774 command_runner.go:130] > NAME=Buildroot
	I0912 18:41:47.299703   25774 command_runner.go:130] > VERSION=2021.02.12-1-gaa74cea-dirty
	I0912 18:41:47.299708   25774 command_runner.go:130] > ID=buildroot
	I0912 18:41:47.299713   25774 command_runner.go:130] > VERSION_ID=2021.02.12
	I0912 18:41:47.299717   25774 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0912 18:41:47.299906   25774 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:41:47.299927   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/addons for local assets ...
	I0912 18:41:47.299995   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/files for local assets ...
	I0912 18:41:47.300083   25774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> 108482.pem in /etc/ssl/certs
	I0912 18:41:47.300095   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /etc/ssl/certs/108482.pem
	I0912 18:41:47.300182   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 18:41:47.307891   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:41:47.330135   25774 start.go:303] post-start completed in 124.459565ms
	I0912 18:41:47.330151   25774 fix.go:56] fixHost completed within 21.57120518s
	I0912 18:41:47.330168   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.332212   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.332586   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.332620   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.332750   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.332956   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.333101   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.333254   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.333426   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:41:47.333724   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0912 18:41:47.333735   25774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0912 18:41:47.443376   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694544107.413556881
	
	I0912 18:41:47.443399   25774 fix.go:206] guest clock: 1694544107.413556881
	I0912 18:41:47.443409   25774 fix.go:219] Guest: 2023-09-12 18:41:47.413556881 +0000 UTC Remote: 2023-09-12 18:41:47.330154345 +0000 UTC m=+21.694086344 (delta=83.402536ms)
	I0912 18:41:47.443449   25774 fix.go:190] guest clock delta is within tolerance: 83.402536ms
	I0912 18:41:47.443457   25774 start.go:83] releasing machines lock for "multinode-348977", held for 21.684524313s
	I0912 18:41:47.443482   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.443730   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:47.446097   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.446567   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.446617   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.446750   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447397   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447575   25774 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:41:47.447653   25774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:41:47.447692   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.447780   25774 ssh_runner.go:195] Run: cat /version.json
	I0912 18:41:47.447796   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:41:47.450306   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450547   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450692   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.450723   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.450860   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.451014   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:47.451041   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.451051   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:47.451131   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:41:47.451226   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.451300   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:41:47.451366   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.451417   25774 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:41:47.451543   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:41:47.556478   25774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 18:41:47.556535   25774 command_runner.go:130] > {"iso_version": "v1.31.0-1694081706-17207", "kicbase_version": "v0.0.40-1693218425-17145", "minikube_version": "v1.31.2", "commit": "1e9174da326b681d7488cd5fad4145a637e5f218"}
	I0912 18:41:47.556664   25774 ssh_runner.go:195] Run: systemctl --version
	I0912 18:41:47.562789   25774 command_runner.go:130] > systemd 247 (247)
	I0912 18:41:47.562819   25774 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0912 18:41:47.563174   25774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 18:41:47.568635   25774 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 18:41:47.568673   25774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:41:47.568744   25774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:41:47.583069   25774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0912 18:41:47.583097   25774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:41:47.583106   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:41:47.583215   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:41:47.600326   25774 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0912 18:41:47.600503   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:41:47.610473   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:41:47.620204   25774 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:41:47.620270   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:41:47.629827   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:41:47.639220   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:41:47.648528   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:41:47.657904   25774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:41:47.668248   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:41:47.678277   25774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:41:47.686527   25774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 18:41:47.686608   25774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:41:47.695327   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:47.797158   25774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:41:47.812584   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:41:47.812657   25774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 18:41:47.826911   25774 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0912 18:41:47.826933   25774 command_runner.go:130] > [Unit]
	I0912 18:41:47.826944   25774 command_runner.go:130] > Description=Docker Application Container Engine
	I0912 18:41:47.826952   25774 command_runner.go:130] > Documentation=https://docs.docker.com
	I0912 18:41:47.826960   25774 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0912 18:41:47.826969   25774 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0912 18:41:47.826978   25774 command_runner.go:130] > StartLimitBurst=3
	I0912 18:41:47.826986   25774 command_runner.go:130] > StartLimitIntervalSec=60
	I0912 18:41:47.826998   25774 command_runner.go:130] > [Service]
	I0912 18:41:47.827006   25774 command_runner.go:130] > Type=notify
	I0912 18:41:47.827015   25774 command_runner.go:130] > Restart=on-failure
	I0912 18:41:47.827032   25774 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0912 18:41:47.827055   25774 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0912 18:41:47.827069   25774 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0912 18:41:47.827082   25774 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0912 18:41:47.827095   25774 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0912 18:41:47.827109   25774 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0912 18:41:47.827127   25774 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0912 18:41:47.827144   25774 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0912 18:41:47.827160   25774 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0912 18:41:47.827169   25774 command_runner.go:130] > ExecStart=
	I0912 18:41:47.827195   25774 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0912 18:41:47.827210   25774 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0912 18:41:47.827222   25774 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0912 18:41:47.827234   25774 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0912 18:41:47.827246   25774 command_runner.go:130] > LimitNOFILE=infinity
	I0912 18:41:47.827266   25774 command_runner.go:130] > LimitNPROC=infinity
	I0912 18:41:47.827278   25774 command_runner.go:130] > LimitCORE=infinity
	I0912 18:41:47.827289   25774 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0912 18:41:47.827299   25774 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0912 18:41:47.827311   25774 command_runner.go:130] > TasksMax=infinity
	I0912 18:41:47.827322   25774 command_runner.go:130] > TimeoutStartSec=0
	I0912 18:41:47.827336   25774 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0912 18:41:47.827346   25774 command_runner.go:130] > Delegate=yes
	I0912 18:41:47.827356   25774 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0912 18:41:47.827363   25774 command_runner.go:130] > KillMode=process
	I0912 18:41:47.827369   25774 command_runner.go:130] > [Install]
	I0912 18:41:47.827385   25774 command_runner.go:130] > WantedBy=multi-user.target
	I0912 18:41:47.827455   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:41:47.849101   25774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 18:41:47.865230   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:41:47.877782   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:41:47.890445   25774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:41:47.918932   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:41:47.930773   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:41:47.947116   25774 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0912 18:41:47.947200   25774 ssh_runner.go:195] Run: which cri-dockerd
	I0912 18:41:47.950521   25774 command_runner.go:130] > /usr/bin/cri-dockerd
	I0912 18:41:47.950648   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 18:41:47.958320   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 18:41:47.973919   25774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 18:41:48.073799   25774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 18:41:48.184968   25774 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 18:41:48.185002   25774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 18:41:48.201823   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:48.299993   25774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 18:41:49.744586   25774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.444560977s)
	I0912 18:41:49.744655   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:41:49.846098   25774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 18:41:49.958418   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:41:50.061865   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:50.173855   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 18:41:50.189825   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:41:50.290635   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0912 18:41:50.371946   25774 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 18:41:50.372017   25774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 18:41:50.377756   25774 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0912 18:41:50.377774   25774 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 18:41:50.377781   25774 command_runner.go:130] > Device: 16h/22d	Inode: 849         Links: 1
	I0912 18:41:50.377800   25774 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0912 18:41:50.377809   25774 command_runner.go:130] > Access: 2023-09-12 18:41:50.278241513 +0000
	I0912 18:41:50.377817   25774 command_runner.go:130] > Modify: 2023-09-12 18:41:50.278241513 +0000
	I0912 18:41:50.377825   25774 command_runner.go:130] > Change: 2023-09-12 18:41:50.281246019 +0000
	I0912 18:41:50.377831   25774 command_runner.go:130] >  Birth: -
	I0912 18:41:50.377939   25774 start.go:537] Will wait 60s for crictl version
	I0912 18:41:50.377991   25774 ssh_runner.go:195] Run: which crictl
	I0912 18:41:50.381510   25774 command_runner.go:130] > /usr/bin/crictl
	I0912 18:41:50.381786   25774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 18:41:50.424429   25774 command_runner.go:130] > Version:  0.1.0
	I0912 18:41:50.424452   25774 command_runner.go:130] > RuntimeName:  docker
	I0912 18:41:50.424460   25774 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0912 18:41:50.424466   25774 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0912 18:41:50.424724   25774 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0912 18:41:50.424789   25774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 18:41:50.450690   25774 command_runner.go:130] > 24.0.6
	I0912 18:41:50.450956   25774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 18:41:50.476349   25774 command_runner.go:130] > 24.0.6
	I0912 18:41:50.479420   25774 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0912 18:41:50.479460   25774 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:41:50.482063   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:50.482385   25774 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:41:37 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:41:50.482420   25774 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:41:50.482563   25774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 18:41:50.486347   25774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:41:50.497696   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:41:50.497741   25774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 18:41:50.517010   25774 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0912 18:41:50.517029   25774 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 18:41:50.517037   25774 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0912 18:41:50.517049   25774 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0912 18:41:50.517055   25774 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0912 18:41:50.517062   25774 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0912 18:41:50.517072   25774 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0912 18:41:50.517083   25774 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0912 18:41:50.517094   25774 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:41:50.517105   25774 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0912 18:41:50.517185   25774 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0912 18:41:50.517206   25774 docker.go:566] Images already preloaded, skipping extraction
	I0912 18:41:50.517258   25774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 18:41:50.536618   25774 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0912 18:41:50.536638   25774 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0912 18:41:50.536646   25774 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0912 18:41:50.536655   25774 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 18:41:50.536663   25774 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0912 18:41:50.536670   25774 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0912 18:41:50.536682   25774 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0912 18:41:50.536688   25774 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0912 18:41:50.536697   25774 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:41:50.536704   25774 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0912 18:41:50.536730   25774 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0912 18:41:50.536746   25774 cache_images.go:84] Images are preloaded, skipping loading
	I0912 18:41:50.536845   25774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 18:41:50.562638   25774 command_runner.go:130] > cgroupfs
	I0912 18:41:50.562909   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:41:50.562928   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:41:50.562949   25774 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 18:41:50.562976   25774 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-348977 NodeName:multinode-348977 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 18:41:50.563124   25774 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-348977"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 18:41:50.563209   25774 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-348977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 18:41:50.563267   25774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 18:41:50.572176   25774 command_runner.go:130] > kubeadm
	I0912 18:41:50.572195   25774 command_runner.go:130] > kubectl
	I0912 18:41:50.572202   25774 command_runner.go:130] > kubelet
	I0912 18:41:50.572227   25774 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 18:41:50.572284   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 18:41:50.580040   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0912 18:41:50.596061   25774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 18:41:50.611346   25774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0912 18:41:50.627820   25774 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0912 18:41:50.631401   25774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:41:50.643788   25774 certs.go:56] Setting up /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977 for IP: 192.168.39.209
	I0912 18:41:50.643818   25774 certs.go:190] acquiring lock for shared ca certs: {Name:mk2421757d3f1bd81d42ecb091844bc5771a96da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:41:50.643980   25774 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key
	I0912 18:41:50.644020   25774 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key
	I0912 18:41:50.644084   25774 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key
	I0912 18:41:50.644164   25774 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key.c475731a
	I0912 18:41:50.644203   25774 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key
	I0912 18:41:50.644214   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 18:41:50.644226   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 18:41:50.644237   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 18:41:50.644251   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 18:41:50.644263   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 18:41:50.644276   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 18:41:50.644288   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 18:41:50.644299   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 18:41:50.644353   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem (1338 bytes)
	W0912 18:41:50.644381   25774 certs.go:433] ignoring /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848_empty.pem, impossibly tiny 0 bytes
	I0912 18:41:50.644391   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 18:41:50.644411   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem (1078 bytes)
	I0912 18:41:50.644433   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem (1123 bytes)
	I0912 18:41:50.644454   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem (1675 bytes)
	I0912 18:41:50.644488   25774 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:41:50.644515   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.644528   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem -> /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.644540   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.645043   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 18:41:50.669387   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 18:41:50.693124   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 18:41:50.717344   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 18:41:50.741383   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 18:41:50.765260   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 18:41:50.788458   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 18:41:50.812054   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 18:41:50.834458   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 18:41:50.856384   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/10848.pem --> /usr/share/ca-certificates/10848.pem (1338 bytes)
	I0912 18:41:50.879015   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /usr/share/ca-certificates/108482.pem (1708 bytes)
	I0912 18:41:50.901340   25774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 18:41:50.916988   25774 ssh_runner.go:195] Run: openssl version
	I0912 18:41:50.922215   25774 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0912 18:41:50.922272   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 18:41:50.931174   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935308   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935555   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.935591   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:41:50.940768   25774 command_runner.go:130] > b5213941
	I0912 18:41:50.940828   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 18:41:50.949933   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10848.pem && ln -fs /usr/share/ca-certificates/10848.pem /etc/ssl/certs/10848.pem"
	I0912 18:41:50.959171   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963298   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 18:25 /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963387   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 18:25 /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.963424   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10848.pem
	I0912 18:41:50.968525   25774 command_runner.go:130] > 51391683
	I0912 18:41:50.968577   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10848.pem /etc/ssl/certs/51391683.0"
	I0912 18:41:50.977420   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108482.pem && ln -fs /usr/share/ca-certificates/108482.pem /etc/ssl/certs/108482.pem"
	I0912 18:41:50.986499   25774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990623   25774 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 18:25 /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990805   25774 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 18:25 /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.990843   25774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108482.pem
	I0912 18:41:50.996022   25774 command_runner.go:130] > 3ec20f2e
	I0912 18:41:50.996068   25774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/108482.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 18:41:51.005086   25774 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 18:41:51.009100   25774 command_runner.go:130] > ca.crt
	I0912 18:41:51.009117   25774 command_runner.go:130] > ca.key
	I0912 18:41:51.009124   25774 command_runner.go:130] > healthcheck-client.crt
	I0912 18:41:51.009131   25774 command_runner.go:130] > healthcheck-client.key
	I0912 18:41:51.009139   25774 command_runner.go:130] > peer.crt
	I0912 18:41:51.009144   25774 command_runner.go:130] > peer.key
	I0912 18:41:51.009151   25774 command_runner.go:130] > server.crt
	I0912 18:41:51.009160   25774 command_runner.go:130] > server.key
	I0912 18:41:51.009343   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 18:41:51.014870   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.014914   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 18:41:51.020073   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.020297   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 18:41:51.025479   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.025531   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 18:41:51.030928   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.030980   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 18:41:51.036454   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.036501   25774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 18:41:51.041830   25774 command_runner.go:130] > Certificate will not expire
	I0912 18:41:51.041883   25774 kubeadm.go:404] StartCluster: {Name:multinode-348977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-348977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:41:51.041992   25774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 18:41:51.060635   25774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 18:41:51.069462   25774 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0912 18:41:51.069486   25774 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0912 18:41:51.069493   25774 command_runner.go:130] > /var/lib/minikube/etcd:
	I0912 18:41:51.069497   25774 command_runner.go:130] > member
	I0912 18:41:51.069726   25774 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0912 18:41:51.069759   25774 kubeadm.go:636] restartCluster start
	I0912 18:41:51.069810   25774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 18:41:51.077935   25774 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.078333   25774 kubeconfig.go:135] verify returned: extract IP: "multinode-348977" does not appear in /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:51.078445   25774 kubeconfig.go:146] "multinode-348977" context is missing from /home/jenkins/minikube-integration/17233-3674/kubeconfig - will repair!
	I0912 18:41:51.078733   25774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3674/kubeconfig: {Name:mked094375583bdbe55c31d17add6f22f93c8430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:41:51.079119   25774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:41:51.079379   25774 kapi.go:59] client config for multinode-348977: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key", CAFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 18:41:51.079954   25774 cert_rotation.go:137] Starting client certificate rotation controller
	I0912 18:41:51.080075   25774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 18:41:51.088002   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.088038   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.098117   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.098131   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.098157   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.109116   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:51.609873   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:51.610041   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:51.622441   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:52.110086   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:52.110176   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:52.121246   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:52.609915   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:52.609995   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:52.621769   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:53.109288   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:53.109378   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:53.120512   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:53.610143   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:53.610216   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:53.621426   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:54.110052   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:54.110138   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:54.121104   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:54.609667   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:54.609758   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:54.621600   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:55.110219   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:55.110305   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:55.121464   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:55.610086   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:55.610163   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:55.623327   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:56.109204   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:56.109279   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:56.120664   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:56.609222   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:56.609302   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:56.620802   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:57.109386   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:57.109488   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:57.120886   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:57.609416   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:57.609490   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:57.621348   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:58.109961   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:58.110033   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:58.121759   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:58.609284   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:58.609358   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:58.620494   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:59.110173   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:59.110270   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:59.121513   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:41:59.610149   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:41:59.610234   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:41:59.621495   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:00.110117   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:42:00.110197   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:42:00.121328   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:00.609937   25774 api_server.go:166] Checking apiserver status ...
	I0912 18:42:00.610014   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 18:42:00.621204   25774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 18:42:01.088910   25774 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0912 18:42:01.088951   25774 kubeadm.go:1128] stopping kube-system containers ...
	I0912 18:42:01.089019   25774 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 18:42:01.110509   25774 command_runner.go:130] > 43aaf5c3bf6e
	I0912 18:42:01.110528   25774 command_runner.go:130] > 012e61091353
	I0912 18:42:01.110532   25774 command_runner.go:130] > 96a48d1e6808
	I0912 18:42:01.110536   25774 command_runner.go:130] > d9fcb5b50176
	I0912 18:42:01.110541   25774 command_runner.go:130] > 5486463296b7
	I0912 18:42:01.110545   25774 command_runner.go:130] > 7791e737cea3
	I0912 18:42:01.110548   25774 command_runner.go:130] > 1e31cfd643be
	I0912 18:42:01.110552   25774 command_runner.go:130] > 061d1cef513d
	I0912 18:42:01.110557   25774 command_runner.go:130] > 5253cfd31af0
	I0912 18:42:01.110564   25774 command_runner.go:130] > ff41c9b085ad
	I0912 18:42:01.110569   25774 command_runner.go:130] > c0587efa38db
	I0912 18:42:01.110575   25774 command_runner.go:130] > 3627cce96a10
	I0912 18:42:01.110581   25774 command_runner.go:130] > 14cac5d320ea
	I0912 18:42:01.110587   25774 command_runner.go:130] > a0de152dc98d
	I0912 18:42:01.110602   25774 command_runner.go:130] > 7fabc68ca233
	I0912 18:42:01.110615   25774 command_runner.go:130] > e113d197f01f
	I0912 18:42:01.110643   25774 docker.go:462] Stopping containers: [43aaf5c3bf6e 012e61091353 96a48d1e6808 d9fcb5b50176 5486463296b7 7791e737cea3 1e31cfd643be 061d1cef513d 5253cfd31af0 ff41c9b085ad c0587efa38db 3627cce96a10 14cac5d320ea a0de152dc98d 7fabc68ca233 e113d197f01f]
	I0912 18:42:01.110731   25774 ssh_runner.go:195] Run: docker stop 43aaf5c3bf6e 012e61091353 96a48d1e6808 d9fcb5b50176 5486463296b7 7791e737cea3 1e31cfd643be 061d1cef513d 5253cfd31af0 ff41c9b085ad c0587efa38db 3627cce96a10 14cac5d320ea a0de152dc98d 7fabc68ca233 e113d197f01f
	I0912 18:42:01.135827   25774 command_runner.go:130] > 43aaf5c3bf6e
	I0912 18:42:01.135850   25774 command_runner.go:130] > 012e61091353
	I0912 18:42:01.135857   25774 command_runner.go:130] > 96a48d1e6808
	I0912 18:42:01.135864   25774 command_runner.go:130] > d9fcb5b50176
	I0912 18:42:01.135871   25774 command_runner.go:130] > 5486463296b7
	I0912 18:42:01.135876   25774 command_runner.go:130] > 7791e737cea3
	I0912 18:42:01.135880   25774 command_runner.go:130] > 1e31cfd643be
	I0912 18:42:01.135883   25774 command_runner.go:130] > 061d1cef513d
	I0912 18:42:01.135898   25774 command_runner.go:130] > 5253cfd31af0
	I0912 18:42:01.135905   25774 command_runner.go:130] > ff41c9b085ad
	I0912 18:42:01.135914   25774 command_runner.go:130] > c0587efa38db
	I0912 18:42:01.135928   25774 command_runner.go:130] > 3627cce96a10
	I0912 18:42:01.135941   25774 command_runner.go:130] > 14cac5d320ea
	I0912 18:42:01.135948   25774 command_runner.go:130] > a0de152dc98d
	I0912 18:42:01.135954   25774 command_runner.go:130] > 7fabc68ca233
	I0912 18:42:01.135961   25774 command_runner.go:130] > e113d197f01f
	I0912 18:42:01.137164   25774 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 18:42:01.153212   25774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 18:42:01.162065   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0912 18:42:01.162089   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0912 18:42:01.162097   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0912 18:42:01.162104   25774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:42:01.162134   25774 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:42:01.162182   25774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 18:42:01.171349   25774 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0912 18:42:01.171377   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:01.289145   25774 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 18:42:01.289913   25774 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0912 18:42:01.290490   25774 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0912 18:42:01.291144   25774 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 18:42:01.292064   25774 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0912 18:42:01.292637   25774 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0912 18:42:01.293503   25774 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0912 18:42:01.294172   25774 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0912 18:42:01.294833   25774 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0912 18:42:01.295368   25774 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 18:42:01.296031   25774 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 18:42:01.297818   25774 command_runner.go:130] > [certs] Using the existing "sa" key
	I0912 18:42:01.298323   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.545462   25774 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 18:42:02.545494   25774 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 18:42:02.545504   25774 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 18:42:02.545512   25774 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 18:42:02.545520   25774 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 18:42:02.545606   25774 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2472468s)
	I0912 18:42:02.545640   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.733506   25774 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 18:42:02.733549   25774 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 18:42:02.733559   25774 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0912 18:42:02.733582   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.804619   25774 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 18:42:02.804641   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 18:42:02.809567   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 18:42:02.811026   25774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 18:42:02.816366   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:02.884578   25774 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 18:42:02.888100   25774 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:42:02.888191   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:02.904156   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:03.418205   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:03.918314   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:04.418376   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:04.917697   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:05.011521   25774 command_runner.go:130] > 1613
	I0912 18:42:05.012117   25774 api_server.go:72] duration metric: took 2.124014474s to wait for apiserver process to appear ...
	I0912 18:42:05.012146   25774 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:42:05.012167   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:05.012754   25774 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I0912 18:42:05.012783   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:05.013807   25774 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I0912 18:42:05.514175   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.050114   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 18:42:08.050147   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 18:42:08.050161   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.082430   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 18:42:08.082459   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 18:42:08.513959   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:08.519156   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 18:42:08.519185   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 18:42:09.014802   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:09.019756   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 18:42:09.019792   25774 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 18:42:09.514302   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:09.520638   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:09.520736   25774 round_trippers.go:463] GET https://192.168.39.209:8443/version
	I0912 18:42:09.520751   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:09.520764   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:09.520778   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:09.528622   25774 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 18:42:09.528643   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:09.528652   25774 round_trippers.go:580]     Audit-Id: d7171996-093f-43cf-b1f6-28902f5d151b
	I0912 18:42:09.528659   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:09.528666   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:09.528674   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:09.528683   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:09.528692   25774 round_trippers.go:580]     Content-Length: 263
	I0912 18:42:09.528702   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:09 GMT
	I0912 18:42:09.528733   25774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 18:42:09.528825   25774 api_server.go:141] control plane version: v1.28.1
	I0912 18:42:09.528843   25774 api_server.go:131] duration metric: took 4.516689082s to wait for apiserver health ...
	I0912 18:42:09.528854   25774 cni.go:84] Creating CNI manager for ""
	I0912 18:42:09.528863   25774 cni.go:136] 3 nodes found, recommending kindnet
	I0912 18:42:09.530468   25774 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 18:42:09.531871   25774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 18:42:09.537270   25774 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0912 18:42:09.537291   25774 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0912 18:42:09.537299   25774 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0912 18:42:09.537309   25774 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 18:42:09.537318   25774 command_runner.go:130] > Access: 2023-09-12 18:41:39.002927530 +0000
	I0912 18:42:09.537330   25774 command_runner.go:130] > Modify: 2023-09-07 15:52:17.000000000 +0000
	I0912 18:42:09.537339   25774 command_runner.go:130] > Change: 2023-09-12 18:41:36.512921513 +0000
	I0912 18:42:09.537346   25774 command_runner.go:130] >  Birth: -
	I0912 18:42:09.537468   25774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 18:42:09.537490   25774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 18:42:09.570387   25774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 18:42:11.089548   25774 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0912 18:42:11.089572   25774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0912 18:42:11.089578   25774 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0912 18:42:11.089583   25774 command_runner.go:130] > daemonset.apps/kindnet configured
	I0912 18:42:11.089601   25774 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.519192244s)
	I0912 18:42:11.089624   25774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:42:11.089698   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:11.089707   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.089714   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.089720   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.094477   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.094497   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.094506   25774 round_trippers.go:580]     Audit-Id: 28e1b4f7-27a4-4728-9259-012beb5aa7e7
	I0912 18:42:11.094513   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.094519   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.094529   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.094536   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.094545   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.097562   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84576 chars]
	I0912 18:42:11.101281   25774 system_pods.go:59] 12 kube-system pods found
	I0912 18:42:11.101307   25774 system_pods.go:61] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 18:42:11.101315   25774 system_pods.go:61] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 18:42:11.101319   25774 system_pods.go:61] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:11.101324   25774 system_pods.go:61] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:11.101329   25774 system_pods.go:61] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0912 18:42:11.101335   25774 system_pods.go:61] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 18:42:11.101344   25774 system_pods.go:61] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 18:42:11.101349   25774 system_pods.go:61] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:11.101354   25774 system_pods.go:61] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:11.101359   25774 system_pods.go:61] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 18:42:11.101365   25774 system_pods.go:61] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 18:42:11.101374   25774 system_pods.go:61] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 18:42:11.101381   25774 system_pods.go:74] duration metric: took 11.751351ms to wait for pod list to return data ...
	I0912 18:42:11.101392   25774 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:42:11.101439   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:11.101446   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.101454   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.101459   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.105805   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.105819   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.105827   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.105841   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.105847   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.105852   25774 round_trippers.go:580]     Audit-Id: ee581856-dce8-447b-8358-f37a47339ad8
	I0912 18:42:11.105857   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.105862   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.106297   25774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13670 chars]
	I0912 18:42:11.106975   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.106994   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107003   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.107007   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107011   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:11.107014   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:11.107018   25774 node_conditions.go:105] duration metric: took 5.622718ms to run NodePressure ...
	I0912 18:42:11.107031   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 18:42:11.464902   25774 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0912 18:42:11.464923   25774 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0912 18:42:11.464949   25774 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0912 18:42:11.465044   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0912 18:42:11.465058   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.465069   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.465075   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.467829   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.467850   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.467860   25774 round_trippers.go:580]     Audit-Id: 78f00d5a-7eb8-4a9e-b90d-d323283aff0d
	I0912 18:42:11.467868   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.467874   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.467879   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.467884   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.467890   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.468461   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"776"},"items":[{"metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"762","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I0912 18:42:11.469818   25774 kubeadm.go:787] kubelet initialised
	I0912 18:42:11.469838   25774 kubeadm.go:788] duration metric: took 4.877378ms waiting for restarted kubelet to initialise ...
	I0912 18:42:11.469845   25774 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:11.469907   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:11.469918   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.469928   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.469935   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.473358   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:11.473371   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.473376   25774 round_trippers.go:580]     Audit-Id: 1e9316d5-4fa8-4920-a611-94538e5de9d2
	I0912 18:42:11.473382   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.473390   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.473395   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.473400   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.473405   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.475084   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"776"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84576 chars]
	I0912 18:42:11.477518   25774 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.477580   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:11.477588   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.477595   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.477600   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.480258   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.480275   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.480281   25774 round_trippers.go:580]     Audit-Id: 26d11606-643c-4680-9fdd-7c6079a0b9d0
	I0912 18:42:11.480287   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.480292   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.480297   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.480302   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.480307   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.480652   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:11.481023   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.481034   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.481040   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.481046   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.483036   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.483082   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.483091   25774 round_trippers.go:580]     Audit-Id: e8a416f7-552b-4b46-8961-5851964b96f3
	I0912 18:42:11.483096   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.483102   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.483111   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.483120   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.483142   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.483383   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.483638   25774 pod_ready.go:97] node "multinode-348977" hosting pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.483651   25774 pod_ready.go:81] duration metric: took 6.116525ms waiting for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.483658   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.483665   25774 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.483707   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348977
	I0912 18:42:11.483714   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.483721   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.483726   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.485560   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.485573   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.485578   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.485583   25774 round_trippers.go:580]     Audit-Id: 4142a2e4-b055-4e71-a505-4b1655dfe4ed
	I0912 18:42:11.485588   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.485593   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.485598   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.485604   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.485740   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"762","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I0912 18:42:11.486046   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.486056   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.486063   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.486069   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.487681   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.487700   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.487708   25774 round_trippers.go:580]     Audit-Id: 183de2f6-83fd-4668-8968-150418f82b3c
	I0912 18:42:11.487716   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.487725   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.487731   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.487739   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.487744   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.488007   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.488347   25774 pod_ready.go:97] node "multinode-348977" hosting pod "etcd-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.488365   25774 pod_ready.go:81] duration metric: took 4.694293ms waiting for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.488375   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "etcd-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.488396   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.488451   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348977
	I0912 18:42:11.488461   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.488472   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.488485   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.490841   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.490854   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.490860   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.490865   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.490870   25774 round_trippers.go:580]     Audit-Id: d2933280-f99b-440c-a008-24b2a483ce04
	I0912 18:42:11.490875   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.490880   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.490885   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.491841   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348977","namespace":"kube-system","uid":"f540dfd0-b1d9-4e3f-b9ab-f02db770e920","resourceVersion":"763","creationTimestamp":"2023-09-12T18:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.209:8443","kubernetes.io/config.hash":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.mirror":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.seen":"2023-09-12T18:38:05.461231178Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I0912 18:42:11.492324   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.492342   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.492359   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.492373   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.494392   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.494403   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.494409   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.494414   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.494419   25774 round_trippers.go:580]     Audit-Id: 2b3e9b5c-ec6e-47ef-8eb4-42325dd1cadd
	I0912 18:42:11.494425   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.494430   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.494435   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.494702   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.495039   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-apiserver-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.495057   25774 pod_ready.go:81] duration metric: took 6.649671ms waiting for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.495064   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-apiserver-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.495070   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.495114   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348977
	I0912 18:42:11.495121   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.495127   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.495134   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.496898   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:11.496911   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.496927   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.496938   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.496951   25774 round_trippers.go:580]     Audit-Id: 0f5e0750-0fcd-4f41-abcd-d523c6aae03a
	I0912 18:42:11.496960   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.496973   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.496986   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.497810   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348977","namespace":"kube-system","uid":"930d0357-f21e-4a4e-8c3b-2cff3263568f","resourceVersion":"764","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.mirror":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.seen":"2023-09-12T18:37:56.784236763Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I0912 18:42:11.498183   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:11.498196   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.498203   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.498209   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.500292   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:11.500307   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.500316   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.500326   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.500336   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.500351   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.500361   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.500373   25774 round_trippers.go:580]     Audit-Id: d118dc25-8eab-43dd-a453-09eea91ee36a
	I0912 18:42:11.500556   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:11.500842   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-controller-manager-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.500855   25774 pod_ready.go:81] duration metric: took 5.775247ms waiting for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:11.500863   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-controller-manager-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:11.500880   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.690301   25774 request.go:629] Waited for 189.366968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:11.690363   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:11.690369   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.690379   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.690387   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.694882   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:11.694902   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.694909   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.694914   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.694919   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.694925   25774 round_trippers.go:580]     Audit-Id: 96fbd2df-71d0-42f2-9668-9a4751b3b372
	I0912 18:42:11.694930   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.694943   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.695466   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wfpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"774a14f5-3c1d-4a3b-a265-290361f0fbe3","resourceVersion":"515","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0912 18:42:11.890223   25774 request.go:629] Waited for 194.343656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:11.890300   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:11.890309   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:11.890325   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:11.890341   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:11.893961   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:11.893979   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:11.893985   25774 round_trippers.go:580]     Audit-Id: d58ddf1c-05d0-4d76-9b86-e75d4563c79f
	I0912 18:42:11.893991   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:11.893996   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:11.894003   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:11.894011   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:11.894022   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:11 GMT
	I0912 18:42:11.894278   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m02","uid":"0a11e94b-756b-4c81-9734-627ddcc38b98","resourceVersion":"581","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3266 chars]
	I0912 18:42:11.894534   25774 pod_ready.go:92] pod "kube-proxy-2wfpr" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:11.894547   25774 pod_ready.go:81] duration metric: took 393.659737ms waiting for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:11.894556   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.089913   25774 request.go:629] Waited for 195.278265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:12.089988   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:12.089997   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.090007   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.090021   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.092533   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.092550   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.092557   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.092563   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.092568   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.092573   25774 round_trippers.go:580]     Audit-Id: ab50bf62-fff6-4392-8010-8f7ac978ac19
	I0912 18:42:12.092578   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.092591   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.092750   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvnqz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d610f9be-c231-4aae-9870-e627ce41bf23","resourceVersion":"736","creationTimestamp":"2023-09-12T18:39:59Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0912 18:42:12.290497   25774 request.go:629] Waited for 197.357363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:12.290566   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:12.290571   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.290578   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.290608   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.293062   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.293078   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.293084   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.293089   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.293094   25774 round_trippers.go:580]     Audit-Id: e4b3f466-3c77-4c24-b13b-af89b75e0355
	I0912 18:42:12.293099   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.293104   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.293108   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.293215   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m03","uid":"03d033eb-43a1-4b37-a2a0-6de70662f3e7","resourceVersion":"753","creationTimestamp":"2023-09-12T18:40:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:40:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0912 18:42:12.293445   25774 pod_ready.go:92] pod "kube-proxy-fvnqz" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:12.293456   25774 pod_ready.go:81] duration metric: took 398.886453ms waiting for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.293465   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.489818   25774 request.go:629] Waited for 196.284343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:12.489872   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:12.489876   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.489884   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.489890   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.492417   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.492438   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.492447   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.492457   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.492465   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.492474   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.492483   25774 round_trippers.go:580]     Audit-Id: 8e58b70d-eb64-4ad7-8f40-d0b9d1828c0c
	I0912 18:42:12.492488   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.493218   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gp457","generateName":"kube-proxy-","namespace":"kube-system","uid":"39d70e08-cba7-4545-a6eb-a2e9152458dc","resourceVersion":"769","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I0912 18:42:12.689962   25774 request.go:629] Waited for 196.319367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:12.690069   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:12.690082   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.690089   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.690095   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.692930   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.692946   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.692952   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.692957   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.692963   25774 round_trippers.go:580]     Audit-Id: 65cc805b-a9c2-4b93-b29d-314f12cbeece
	I0912 18:42:12.692970   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.692978   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.692991   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.693385   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:12.693887   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-proxy-gp457" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:12.693914   25774 pod_ready.go:81] duration metric: took 400.443481ms waiting for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:12.693926   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-proxy-gp457" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:12.693942   25774 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:12.890377   25774 request.go:629] Waited for 196.363771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:12.890460   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:12.890467   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:12.890477   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:12.890486   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:12.893424   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:12.893446   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:12.893456   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:12.893471   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:12.893483   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:12 GMT
	I0912 18:42:12.893490   25774 round_trippers.go:580]     Audit-Id: d470dad8-0297-4d0d-a80e-ac6f86679c42
	I0912 18:42:12.893497   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:12.893505   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:12.893673   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348977","namespace":"kube-system","uid":"69ef187d-8c5d-4b26-861e-4a2178c309e7","resourceVersion":"765","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.mirror":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.seen":"2023-09-12T18:37:56.784237754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I0912 18:42:13.090457   25774 request.go:629] Waited for 196.397433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.090511   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.090515   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.090523   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.090532   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.093408   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.093432   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.093443   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.093452   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.093461   25774 round_trippers.go:580]     Audit-Id: 867db361-341c-4413-be9c-31e1e7cc54ab
	I0912 18:42:13.093470   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.093482   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.093490   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.093840   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.094121   25774 pod_ready.go:97] node "multinode-348977" hosting pod "kube-scheduler-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:13.094136   25774 pod_ready.go:81] duration metric: took 400.181932ms waiting for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	E0912 18:42:13.094144   25774 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348977" hosting pod "kube-scheduler-multinode-348977" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348977" has status "Ready":"False"
	I0912 18:42:13.094153   25774 pod_ready.go:38] duration metric: took 1.62429968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:13.094171   25774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 18:42:13.109771   25774 command_runner.go:130] > -16
	I0912 18:42:13.109820   25774 ops.go:34] apiserver oom_adj: -16
	I0912 18:42:13.109828   25774 kubeadm.go:640] restartCluster took 22.040060524s
	I0912 18:42:13.109838   25774 kubeadm.go:406] StartCluster complete in 22.067960392s
	I0912 18:42:13.109857   25774 settings.go:142] acquiring lock: {Name:mk701ee4b509c72ea6c30dd8b1ed35b0318b6f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:42:13.109946   25774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:42:13.110630   25774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3674/kubeconfig: {Name:mked094375583bdbe55c31d17add6f22f93c8430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:42:13.110874   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 18:42:13.110895   25774 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0912 18:42:13.113800   25774 out.go:177] * Enabled addons: 
	I0912 18:42:13.111083   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:13.111144   25774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:42:13.115047   25774 addons.go:502] enable addons completed in 4.162587ms: enabled=[]
	I0912 18:42:13.115270   25774 kapi.go:59] client config for multinode-348977: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/client.key", CAFile:"/home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 18:42:13.115629   25774 round_trippers.go:463] GET https://192.168.39.209:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 18:42:13.115643   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.115653   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.115662   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.118334   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.118358   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.118367   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.118375   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.118386   25774 round_trippers.go:580]     Content-Length: 291
	I0912 18:42:13.118396   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.118404   25774 round_trippers.go:580]     Audit-Id: 977f14e1-5a64-4189-aeab-98356e20ae68
	I0912 18:42:13.118415   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.118423   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.118453   25774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"689d8907-7a8c-41b5-a29a-3d911c1eccad","resourceVersion":"775","creationTimestamp":"2023-09-12T18:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0912 18:42:13.118646   25774 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-348977" context rescaled to 1 replicas
	I0912 18:42:13.118678   25774 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 18:42:13.120242   25774 out.go:177] * Verifying Kubernetes components...
	I0912 18:42:13.121523   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:13.243631   25774 command_runner.go:130] > apiVersion: v1
	I0912 18:42:13.243656   25774 command_runner.go:130] > data:
	I0912 18:42:13.243663   25774 command_runner.go:130] >   Corefile: |
	I0912 18:42:13.243669   25774 command_runner.go:130] >     .:53 {
	I0912 18:42:13.243674   25774 command_runner.go:130] >         log
	I0912 18:42:13.243681   25774 command_runner.go:130] >         errors
	I0912 18:42:13.243688   25774 command_runner.go:130] >         health {
	I0912 18:42:13.243695   25774 command_runner.go:130] >            lameduck 5s
	I0912 18:42:13.243700   25774 command_runner.go:130] >         }
	I0912 18:42:13.243708   25774 command_runner.go:130] >         ready
	I0912 18:42:13.243717   25774 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0912 18:42:13.243723   25774 command_runner.go:130] >            pods insecure
	I0912 18:42:13.243736   25774 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0912 18:42:13.243744   25774 command_runner.go:130] >            ttl 30
	I0912 18:42:13.243751   25774 command_runner.go:130] >         }
	I0912 18:42:13.243768   25774 command_runner.go:130] >         prometheus :9153
	I0912 18:42:13.243775   25774 command_runner.go:130] >         hosts {
	I0912 18:42:13.243783   25774 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0912 18:42:13.243791   25774 command_runner.go:130] >            fallthrough
	I0912 18:42:13.243797   25774 command_runner.go:130] >         }
	I0912 18:42:13.243806   25774 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0912 18:42:13.243816   25774 command_runner.go:130] >            max_concurrent 1000
	I0912 18:42:13.243822   25774 command_runner.go:130] >         }
	I0912 18:42:13.243832   25774 command_runner.go:130] >         cache 30
	I0912 18:42:13.243840   25774 command_runner.go:130] >         loop
	I0912 18:42:13.243849   25774 command_runner.go:130] >         reload
	I0912 18:42:13.243856   25774 command_runner.go:130] >         loadbalance
	I0912 18:42:13.243867   25774 command_runner.go:130] >     }
	I0912 18:42:13.243874   25774 command_runner.go:130] > kind: ConfigMap
	I0912 18:42:13.243887   25774 command_runner.go:130] > metadata:
	I0912 18:42:13.243899   25774 command_runner.go:130] >   creationTimestamp: "2023-09-12T18:38:05Z"
	I0912 18:42:13.243906   25774 command_runner.go:130] >   name: coredns
	I0912 18:42:13.243914   25774 command_runner.go:130] >   namespace: kube-system
	I0912 18:42:13.243921   25774 command_runner.go:130] >   resourceVersion: "402"
	I0912 18:42:13.243933   25774 command_runner.go:130] >   uid: 2097770d-506f-410e-985d-435a9559f646
	I0912 18:42:13.246000   25774 node_ready.go:35] waiting up to 6m0s for node "multinode-348977" to be "Ready" ...
	I0912 18:42:13.249599   25774 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0912 18:42:13.290690   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.290724   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.290733   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.290739   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.293505   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.293530   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.293550   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.293559   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.293566   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.293575   25774 round_trippers.go:580]     Audit-Id: 20c8a9a7-ec03-44a7-92e1-ea050ea6d00e
	I0912 18:42:13.293585   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.293593   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.293869   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.490632   25774 request.go:629] Waited for 196.383051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.490691   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.490698   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.490712   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.490725   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.494985   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:13.495009   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.495018   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.495032   25774 round_trippers.go:580]     Audit-Id: 5b0c52ae-aeab-43df-8442-d1ed1c51940b
	I0912 18:42:13.495040   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:13.495049   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.495062   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.495075   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.495435   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:13.996558   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:13.996593   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:13.996606   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:13.996614   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:13.999491   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:13.999512   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:13.999521   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:13.999529   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:13.999536   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:13.999544   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:13 GMT
	I0912 18:42:13.999551   25774 round_trippers.go:580]     Audit-Id: 71109b58-96c6-4648-9138-b568a04bbb01
	I0912 18:42:13.999560   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.000156   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:14.496863   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:14.496885   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:14.496893   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:14.496899   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:14.499602   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:14.499621   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:14.499631   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.499640   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:14.499648   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:14.499657   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:14.499670   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:14 GMT
	I0912 18:42:14.499679   25774 round_trippers.go:580]     Audit-Id: 2f6f7361-0c59-4c7d-8e76-3e32fdddd5c7
	I0912 18:42:14.499882   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:14.996605   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:14.996627   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:14.996635   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:14.996642   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:14.999646   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:14.999672   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:14.999682   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:14 GMT
	I0912 18:42:14.999688   25774 round_trippers.go:580]     Audit-Id: caa22104-8cb4-422b-9f8c-58a2074742d9
	I0912 18:42:14.999693   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:14.999698   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:14.999703   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:14.999712   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.000041   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"761","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I0912 18:42:15.496765   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.496794   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.496806   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.496816   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.499654   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:15.499672   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.499679   25774 round_trippers.go:580]     Audit-Id: 770efff3-c863-4db0-aed0-8b0e4f8a7f95
	I0912 18:42:15.499684   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.499689   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.499694   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.499699   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.499704   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.499880   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:15.500255   25774 node_ready.go:49] node "multinode-348977" has status "Ready":"True"
	I0912 18:42:15.500274   25774 node_ready.go:38] duration metric: took 2.254247875s waiting for node "multinode-348977" to be "Ready" ...
	I0912 18:42:15.500284   25774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:15.500345   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:15.500357   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.500368   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.500378   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.503778   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:15.503796   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.503806   25774 round_trippers.go:580]     Audit-Id: 8d81f8ed-046a-432d-a997-45f7f7e48558
	I0912 18:42:15.503816   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.503826   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.503835   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.503840   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.503845   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.506034   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"847"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83986 chars]
	I0912 18:42:15.508504   25774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:15.508567   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:15.508575   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.508582   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.508590   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.510706   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:15.510722   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.510731   25774 round_trippers.go:580]     Audit-Id: c9cdf90d-6c73-47f6-ae2c-89120b937596
	I0912 18:42:15.510739   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.510747   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.510755   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.510763   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.510771   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.511016   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:15.511382   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.511392   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.511399   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.511404   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.512961   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.512976   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.512984   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.512992   25774 round_trippers.go:580]     Audit-Id: cb80b5ee-0f06-45a7-a84b-162ab1d3304c
	I0912 18:42:15.513000   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.513006   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.513011   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.513016   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.513296   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:15.513696   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:15.513711   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.513722   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.513732   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.515569   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.515587   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.515597   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.515604   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.515609   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.515614   25774 round_trippers.go:580]     Audit-Id: d54f597f-34a0-4a2a-8871-cd8c93e54504
	I0912 18:42:15.515619   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.515625   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.515763   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:15.516137   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:15.516152   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:15.516161   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:15.516170   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:15.517873   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:15.517885   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:15.517891   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:15 GMT
	I0912 18:42:15.517896   25774 round_trippers.go:580]     Audit-Id: 5c3f42bb-785e-463b-8c12-afb92be30ba6
	I0912 18:42:15.517902   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:15.517911   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:15.517925   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:15.517932   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:15.518198   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:16.019174   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:16.019195   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.019203   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.019209   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.021794   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.021810   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.021824   25774 round_trippers.go:580]     Audit-Id: 9b21fe37-c8aa-4aa9-9f91-f2ff18584580
	I0912 18:42:16.021830   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.021842   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.021854   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.021862   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.021878   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.022301   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:16.022856   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:16.022871   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.022878   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.022884   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.024919   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.024939   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.024949   25774 round_trippers.go:580]     Audit-Id: cc43d89f-08b3-4bc9-a101-2bd12aabeb44
	I0912 18:42:16.024964   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.024977   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.024986   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.024993   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.025004   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.025365   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:16.518991   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:16.519035   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.519045   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.519051   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.521811   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.521835   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.521846   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.521855   25774 round_trippers.go:580]     Audit-Id: bdd70f78-8f74-4b34-8b50-3cdda2609256
	I0912 18:42:16.521865   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.521874   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.521887   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.521899   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.522407   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:16.522902   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:16.522916   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:16.522923   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:16.522928   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:16.525008   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:16.525028   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:16.525037   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:16.525046   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:16 GMT
	I0912 18:42:16.525062   25774 round_trippers.go:580]     Audit-Id: cf83a590-0ef9-44db-9645-64394ec5153e
	I0912 18:42:16.525070   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:16.525083   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:16.525094   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:16.525483   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.019210   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:17.019234   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.019242   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.019248   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.022180   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.022204   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.022214   25774 round_trippers.go:580]     Audit-Id: 6d0c1aa8-a449-4955-862b-564e063d1920
	I0912 18:42:17.022223   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.022233   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.022240   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.022249   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.022261   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.022965   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:17.023459   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:17.023473   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.023480   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.023486   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.025651   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.025670   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.025679   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.025688   25774 round_trippers.go:580]     Audit-Id: 1a36148e-df68-4ad3-ad5d-a35f5dec8c94
	I0912 18:42:17.025701   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.025709   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.025719   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.025727   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.026038   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.518682   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:17.518705   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.518716   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.518727   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.521775   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:17.521818   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.521829   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.521839   25774 round_trippers.go:580]     Audit-Id: f9326923-a290-4bad-abdf-2105fe92c5b4
	I0912 18:42:17.521848   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.521858   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.521868   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.521881   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.522258   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:17.522690   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:17.522702   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:17.522709   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:17.522715   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:17.525035   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:17.525050   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:17.525056   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:17 GMT
	I0912 18:42:17.525061   25774 round_trippers.go:580]     Audit-Id: 55b1395b-82d0-4899-9ce6-224c280343e7
	I0912 18:42:17.525066   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:17.525071   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:17.525077   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:17.525082   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:17.525229   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:17.525477   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:18.018887   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:18.018910   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.018919   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.018925   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.021874   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:18.021896   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.021906   25774 round_trippers.go:580]     Audit-Id: ac7164e8-d532-42d7-8e73-9713804614fe
	I0912 18:42:18.021916   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.021925   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.021934   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.021941   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.021946   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.022226   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:18.022712   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:18.022732   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.022739   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.022745   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.024717   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:18.024730   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.024736   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.024741   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.024746   25774 round_trippers.go:580]     Audit-Id: dfceff8d-4f6c-43e8-bf6a-8631bb9a3cce
	I0912 18:42:18.024751   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.024756   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.024761   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.025289   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:18.518950   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:18.518978   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.518986   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.518992   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.522096   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:18.522119   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.522132   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.522138   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.522143   25774 round_trippers.go:580]     Audit-Id: fa5de5c7-191b-4b2f-beff-478320c3a667
	I0912 18:42:18.522150   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.522158   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.522166   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.522703   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:18.523115   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:18.523126   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:18.523133   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:18.523139   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:18.525574   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:18.525593   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:18.525599   25774 round_trippers.go:580]     Audit-Id: ab6931c6-4f57-4c7b-b9aa-12d3508c6379
	I0912 18:42:18.525605   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:18.525610   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:18.525615   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:18.525620   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:18.525625   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:18 GMT
	I0912 18:42:18.525939   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.018605   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:19.018628   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.018636   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.018642   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.021331   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:19.021351   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.021359   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.021366   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.021374   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.021382   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.021392   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.021414   25774 round_trippers.go:580]     Audit-Id: 27419a73-847c-4f51-bd91-89e052f1edb4
	I0912 18:42:19.021912   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:19.022329   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:19.022342   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.022349   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.022355   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.024339   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:19.024359   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.024368   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.024378   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.024388   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.024395   25774 round_trippers.go:580]     Audit-Id: d84dc941-78b5-4672-a72d-ddd4cb0d7c29
	I0912 18:42:19.024409   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.024418   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.024715   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.519434   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:19.519470   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.519482   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.519491   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.522336   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:19.522358   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.522368   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.522376   25774 round_trippers.go:580]     Audit-Id: 3558f8cc-a921-4e89-84e1-6ac9cde9cd1e
	I0912 18:42:19.522385   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.522394   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.522405   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.522420   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.522736   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:19.523291   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:19.523305   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:19.523312   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:19.523317   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:19.525325   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:19.525338   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:19.525345   25774 round_trippers.go:580]     Audit-Id: aa86495f-0156-411b-a070-e22a45555259
	I0912 18:42:19.525350   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:19.525355   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:19.525361   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:19.525369   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:19.525377   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:19 GMT
	I0912 18:42:19.525749   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:19.526064   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:20.019472   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:20.019497   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.019509   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.019520   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.022460   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.022483   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.022493   25774 round_trippers.go:580]     Audit-Id: 7716efa4-da02-44ff-bfa0-9dcb7861e619
	I0912 18:42:20.022502   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.022510   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.022518   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.022526   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.022539   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.023124   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:20.023619   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:20.023637   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.023647   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.023654   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.026215   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.026229   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.026235   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.026240   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.026246   25774 round_trippers.go:580]     Audit-Id: fa7b0ca0-8540-4605-82ab-535bcc959a68
	I0912 18:42:20.026254   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.026263   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.026272   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.026670   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:20.519423   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:20.519453   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.519465   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.519475   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.522173   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:20.522191   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.522201   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.522207   25774 round_trippers.go:580]     Audit-Id: 0e759373-0227-4917-8a2a-ff09025291b0
	I0912 18:42:20.522212   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.522217   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.522222   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.522229   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.522492   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:20.523018   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:20.523039   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:20.523047   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:20.523055   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:20.525064   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:20.525083   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:20.525093   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:20.525101   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:20.525112   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:20 GMT
	I0912 18:42:20.525123   25774 round_trippers.go:580]     Audit-Id: e9114154-e1ac-42a2-857e-78c0a336e42e
	I0912 18:42:20.525134   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:20.525145   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:20.525296   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:21.018922   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:21.018949   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.018962   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.018985   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.021703   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.021729   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.021742   25774 round_trippers.go:580]     Audit-Id: 21cbb700-4503-4ee3-80d7-74b589e29284
	I0912 18:42:21.021750   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.021757   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.021768   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.021774   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.021784   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.022150   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:21.022724   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:21.022739   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.022746   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.022751   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.024834   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.024852   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.024872   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.024890   25774 round_trippers.go:580]     Audit-Id: eb216dd9-7990-4170-99be-ce935ee83b5b
	I0912 18:42:21.024898   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.024906   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.024912   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.024917   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.025328   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:21.519273   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:21.519299   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.519312   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.519321   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.521761   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.521787   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.521797   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.521804   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.521810   25774 round_trippers.go:580]     Audit-Id: df785d50-875d-4b1e-b8b8-fa57c4b91949
	I0912 18:42:21.521815   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.521820   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.521826   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.522091   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:21.522763   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:21.522783   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:21.522795   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:21.522804   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:21.525017   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:21.525035   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:21.525042   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:21.525047   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:21.525056   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:21 GMT
	I0912 18:42:21.525062   25774 round_trippers.go:580]     Audit-Id: 5f96ccf0-b25d-421e-8606-0097faa881df
	I0912 18:42:21.525067   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:21.525072   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:21.525186   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:22.018823   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:22.018846   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.018854   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.018861   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.021780   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.021805   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.021816   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.021823   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.021829   25774 round_trippers.go:580]     Audit-Id: 6d1f4828-16d5-4d36-9236-531d7c6463cb
	I0912 18:42:22.021834   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.021840   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.021845   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.022058   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:22.022637   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:22.022651   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.022658   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.022666   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.025235   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.025254   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.025264   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.025274   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.025289   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.025298   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.025307   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.025314   25774 round_trippers.go:580]     Audit-Id: 28240cd4-59d0-4af1-9428-aa9546fb2eb2
	I0912 18:42:22.025722   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:22.025994   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:22.519498   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:22.519533   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.519545   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.519615   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.522127   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:22.522155   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.522165   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.522173   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.522182   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.522190   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.522198   25774 round_trippers.go:580]     Audit-Id: 72500589-0df9-4e05-a284-4aab07bc1a90
	I0912 18:42:22.522205   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.522539   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:22.523066   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:22.523081   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:22.523088   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:22.523093   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:22.526415   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:22.526434   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:22.526444   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:22 GMT
	I0912 18:42:22.526452   25774 round_trippers.go:580]     Audit-Id: aba69887-23b0-4ad3-9591-255738e5c9cd
	I0912 18:42:22.526472   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:22.526480   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:22.526489   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:22.526497   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:22.526919   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:23.018648   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:23.018677   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.018688   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.018697   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.021371   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.021393   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.021401   25774 round_trippers.go:580]     Audit-Id: 308ed084-779c-4b7e-a6f9-8d335954f26d
	I0912 18:42:23.021409   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.021417   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.021426   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.021433   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.021442   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.021652   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:23.022261   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:23.022276   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.022283   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.022296   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.024418   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.024438   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.024447   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.024460   25774 round_trippers.go:580]     Audit-Id: f8385c41-1ac3-4937-8e08-0853d2f07b61
	I0912 18:42:23.024472   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.024480   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.024493   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.024502   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.024636   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:23.519355   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:23.519379   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.519387   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.519393   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.521923   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.521935   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.521941   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.521946   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.521952   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.521961   25774 round_trippers.go:580]     Audit-Id: 8df3aec7-70dd-46ad-9b0c-60e47006f66d
	I0912 18:42:23.521967   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.521972   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.522313   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:23.522786   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:23.522800   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:23.522807   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:23.522813   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:23.525090   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:23.525102   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:23.525108   25774 round_trippers.go:580]     Audit-Id: c5a51933-8848-4d9d-86b8-cfa9a1715c83
	I0912 18:42:23.525113   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:23.525118   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:23.525123   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:23.525128   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:23.525133   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:23 GMT
	I0912 18:42:23.525525   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.019244   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:24.019267   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.019275   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.019281   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.022149   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.022178   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.022184   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.022190   25774 round_trippers.go:580]     Audit-Id: 0e54f509-fea2-4447-95e7-0adef2cc4a71
	I0912 18:42:24.022195   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.022206   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.022214   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.022224   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.022566   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:24.023022   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:24.023034   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.023041   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.023047   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.025255   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.025267   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.025273   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.025278   25774 round_trippers.go:580]     Audit-Id: 4012be6b-a1d1-4822-8153-07785c0f087c
	I0912 18:42:24.025285   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.025290   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.025295   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.025300   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.025513   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.519223   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:24.519245   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.519253   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.519259   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.521740   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.521759   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.521769   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.521778   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.521806   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.521820   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.521828   25774 round_trippers.go:580]     Audit-Id: dd7800e3-c33b-4d72-b2a4-1f435c3588d6
	I0912 18:42:24.521838   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.522426   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:24.522882   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:24.522894   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:24.522904   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:24.522916   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:24.524935   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:24.524951   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:24.524966   25774 round_trippers.go:580]     Audit-Id: e70caa01-d717-4aa8-b453-e6b24304ecb7
	I0912 18:42:24.524975   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:24.524987   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:24.524992   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:24.524997   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:24.525003   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:24 GMT
	I0912 18:42:24.525181   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:24.525544   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:25.018770   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:25.018797   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.018809   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.018818   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.021606   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.021658   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.021669   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.021678   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.021685   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.021697   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.021709   25774 round_trippers.go:580]     Audit-Id: a707fbe4-5e5a-4a76-9552-ae18693b3ade
	I0912 18:42:25.021718   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.023780   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:25.024379   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:25.024398   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.024408   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.024426   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.026655   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.026674   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.026683   25774 round_trippers.go:580]     Audit-Id: 1d90b0fb-50df-4d38-bf46-db8bc42a342b
	I0912 18:42:25.026691   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.026701   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.026709   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.026718   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.026726   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.026889   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:25.519646   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:25.519678   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.519688   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.519694   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.522377   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.522402   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.522425   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.522434   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.522443   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.522455   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.522465   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.522475   25774 round_trippers.go:580]     Audit-Id: 92944b9c-849f-477a-8160-683445d1d4a8
	I0912 18:42:25.523055   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:25.523492   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:25.523505   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:25.523512   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:25.523517   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:25.526015   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:25.526034   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:25.526046   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:25.526055   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:25 GMT
	I0912 18:42:25.526071   25774 round_trippers.go:580]     Audit-Id: 1ee1e622-a49f-4d1d-bc0d-11e709dd8dda
	I0912 18:42:25.526079   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:25.526090   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:25.526096   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:25.526218   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:26.019099   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:26.019117   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.019125   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.019131   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.022416   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:26.022440   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.022450   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.022456   25774 round_trippers.go:580]     Audit-Id: 49021994-f425-426a-b645-d11b0bef6ff2
	I0912 18:42:26.022461   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.022469   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.022474   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.022480   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.023077   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:26.023486   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:26.023497   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.023504   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.023509   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.026077   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.026094   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.026104   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.026111   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.026118   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.026126   25774 round_trippers.go:580]     Audit-Id: 52a298de-ed90-4986-b7be-58541206edef
	I0912 18:42:26.026135   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.026145   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.026364   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:26.519031   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:26.519052   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.519060   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.519067   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.521656   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.521678   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.521686   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.521691   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.521696   25774 round_trippers.go:580]     Audit-Id: 0ff6011d-720e-449b-8eb1-46b0e14ea217
	I0912 18:42:26.521701   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.521706   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.521711   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.522134   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"766","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I0912 18:42:26.522540   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:26.522554   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:26.522560   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:26.522566   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:26.524786   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:26.524800   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:26.524806   25774 round_trippers.go:580]     Audit-Id: 2d945f25-12df-4442-8171-8110b7ec953e
	I0912 18:42:26.524811   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:26.524819   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:26.524827   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:26.524836   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:26.524845   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:26 GMT
	I0912 18:42:26.525119   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.018765   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:27.018794   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.018805   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.018815   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.025520   25774 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0912 18:42:27.025543   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.025553   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.025560   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.025567   25774 round_trippers.go:580]     Audit-Id: 6018ab1b-6d81-43ce-9088-f6d64d3ef8f9
	I0912 18:42:27.025576   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.025584   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.025591   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.025734   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"882","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6722 chars]
	I0912 18:42:27.026159   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.026170   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.026177   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.026183   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.029454   25774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 18:42:27.029469   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.029476   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.029481   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.029486   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.029491   25774 round_trippers.go:580]     Audit-Id: 058f3ee6-b56c-4d93-b76e-c92601975585
	I0912 18:42:27.029497   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.029506   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.029625   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.029892   25774 pod_ready.go:102] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"False"
	I0912 18:42:27.519325   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bsdfd
	I0912 18:42:27.519347   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.519355   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.519361   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.521737   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.521753   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.521760   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.521765   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.521771   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.521776   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.521781   25774 round_trippers.go:580]     Audit-Id: 0685a0df-f7eb-4093-ab97-48796cc84165
	I0912 18:42:27.521789   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.522261   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0912 18:42:27.522688   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.522699   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.522706   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.522712   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.524650   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.524669   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.524679   25774 round_trippers.go:580]     Audit-Id: fd53992f-f915-482b-91e2-7915a59fa965
	I0912 18:42:27.524688   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.524696   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.524707   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.524715   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.524739   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.525068   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.525328   25774 pod_ready.go:92] pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.525341   25774 pod_ready.go:81] duration metric: took 12.016818518s waiting for pod "coredns-5dd5756b68-bsdfd" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.525348   25774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.525392   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348977
	I0912 18:42:27.525399   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.525406   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.525411   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.527348   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.527362   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.527368   25774 round_trippers.go:580]     Audit-Id: 7c82a007-a8b9-458c-b72f-b5158f5d9f79
	I0912 18:42:27.527373   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.527379   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.527384   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.527392   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.527397   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.527569   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348977","namespace":"kube-system","uid":"1510b000-87cc-4e3c-9293-46db511afdb8","resourceVersion":"870","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.209:2379","kubernetes.io/config.hash":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.mirror":"e52d7a1e285cba1cf5f4d95c1d0e29e6","kubernetes.io/config.seen":"2023-09-12T18:37:56.784222349Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0912 18:42:27.527970   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.527988   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.527999   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.528008   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.529544   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.529556   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.529562   25774 round_trippers.go:580]     Audit-Id: 49c08a23-43c6-4b36-97cd-cdf623268d39
	I0912 18:42:27.529567   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.529572   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.529580   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.529585   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.529590   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.529750   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.530037   25774 pod_ready.go:92] pod "etcd-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.530051   25774 pod_ready.go:81] duration metric: took 4.69789ms waiting for pod "etcd-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.530068   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.530109   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348977
	I0912 18:42:27.530119   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.530129   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.530140   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.532020   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.532031   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.532036   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.532041   25774 round_trippers.go:580]     Audit-Id: 69b6d28a-cc81-4865-a415-98d5e4ab2e88
	I0912 18:42:27.532046   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.532052   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.532061   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.532066   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.532210   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348977","namespace":"kube-system","uid":"f540dfd0-b1d9-4e3f-b9ab-f02db770e920","resourceVersion":"857","creationTimestamp":"2023-09-12T18:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.209:8443","kubernetes.io/config.hash":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.mirror":"4abe28b137e1ba2381404609e97bb3f7","kubernetes.io/config.seen":"2023-09-12T18:38:05.461231178Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I0912 18:42:27.532613   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.532626   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.532633   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.532639   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.534337   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.534348   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.534354   25774 round_trippers.go:580]     Audit-Id: d8ed022c-9bdc-426c-8417-9bdbab3e0568
	I0912 18:42:27.534359   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.534364   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.534368   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.534373   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.534378   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.534556   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.534912   25774 pod_ready.go:92] pod "kube-apiserver-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.534932   25774 pod_ready.go:81] duration metric: took 4.857194ms waiting for pod "kube-apiserver-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.534941   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.535010   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348977
	I0912 18:42:27.535020   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.535026   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.535032   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.536478   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.536489   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.536498   25774 round_trippers.go:580]     Audit-Id: dc26cd07-5e2b-418c-81e4-ed7f5f4cea37
	I0912 18:42:27.536506   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.536520   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.536528   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.536540   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.536552   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.536872   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348977","namespace":"kube-system","uid":"930d0357-f21e-4a4e-8c3b-2cff3263568f","resourceVersion":"851","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.mirror":"407ffa10bfa8fa62381ddd301a0b2a3f","kubernetes.io/config.seen":"2023-09-12T18:37:56.784236763Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I0912 18:42:27.537190   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:27.537201   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.537208   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.537213   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.539168   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.539187   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.539196   25774 round_trippers.go:580]     Audit-Id: 96077995-eaf1-4ae5-816a-8a44fe54d0e0
	I0912 18:42:27.539205   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.539217   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.539225   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.539236   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.539247   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.539389   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:27.539705   25774 pod_ready.go:92] pod "kube-controller-manager-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.539721   25774 pod_ready.go:81] duration metric: took 4.774197ms waiting for pod "kube-controller-manager-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.539730   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.539778   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wfpr
	I0912 18:42:27.539785   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.539792   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.539797   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.541391   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.541405   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.541412   25774 round_trippers.go:580]     Audit-Id: d5988258-541c-4b62-b811-17340c9d4c61
	I0912 18:42:27.541417   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.541422   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.541429   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.541436   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.541443   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.541635   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wfpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"774a14f5-3c1d-4a3b-a265-290361f0fbe3","resourceVersion":"515","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0912 18:42:27.541939   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m02
	I0912 18:42:27.541951   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.541957   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.541962   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.543735   25774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 18:42:27.543753   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.543762   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.543770   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.543779   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.543787   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.543795   25774 round_trippers.go:580]     Audit-Id: c439d5a2-848f-462c-8997-8b09354202f6
	I0912 18:42:27.543803   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.544002   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m02","uid":"0a11e94b-756b-4c81-9734-627ddcc38b98","resourceVersion":"581","creationTimestamp":"2023-09-12T18:39:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3266 chars]
	I0912 18:42:27.544241   25774 pod_ready.go:92] pod "kube-proxy-2wfpr" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.544255   25774 pod_ready.go:81] duration metric: took 4.520204ms waiting for pod "kube-proxy-2wfpr" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.544264   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.719627   25774 request.go:629] Waited for 175.317143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:27.719692   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvnqz
	I0912 18:42:27.719702   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.719713   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.719724   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.722697   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.722720   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.722730   25774 round_trippers.go:580]     Audit-Id: d3483f79-1d80-489b-9726-e0bcfc0757be
	I0912 18:42:27.722738   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.722746   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.722754   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.722762   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.722770   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.722965   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvnqz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d610f9be-c231-4aae-9870-e627ce41bf23","resourceVersion":"736","creationTimestamp":"2023-09-12T18:39:59Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:39:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0912 18:42:27.919793   25774 request.go:629] Waited for 196.375026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:27.919854   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977-m03
	I0912 18:42:27.919859   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:27.919866   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:27.919873   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:27.922352   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:27.922369   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:27.922376   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:27.922381   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:27 GMT
	I0912 18:42:27.922386   25774 round_trippers.go:580]     Audit-Id: 18dc451a-97b4-4669-a8d1-fe83de2c3208
	I0912 18:42:27.922391   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:27.922396   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:27.922401   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:27.922552   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977-m03","uid":"03d033eb-43a1-4b37-a2a0-6de70662f3e7","resourceVersion":"753","creationTimestamp":"2023-09-12T18:40:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:40:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0912 18:42:27.922880   25774 pod_ready.go:92] pod "kube-proxy-fvnqz" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:27.922898   25774 pod_ready.go:81] duration metric: took 378.627886ms waiting for pod "kube-proxy-fvnqz" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:27.922913   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.120317   25774 request.go:629] Waited for 197.342872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:28.120378   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp457
	I0912 18:42:28.120383   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.120397   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.120412   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.123127   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.123147   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.123154   25774 round_trippers.go:580]     Audit-Id: 091153d1-359b-4f12-a3a3-ccdbdc81297d
	I0912 18:42:28.123160   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.123165   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.123170   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.123175   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.123181   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.123500   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gp457","generateName":"kube-proxy-","namespace":"kube-system","uid":"39d70e08-cba7-4545-a6eb-a2e9152458dc","resourceVersion":"844","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a36b217e-71d0-46af-bc79-d8b5d1e320de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a36b217e-71d0-46af-bc79-d8b5d1e320de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I0912 18:42:28.320266   25774 request.go:629] Waited for 196.341863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.320310   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.320315   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.320322   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.320328   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.322784   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.322801   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.322807   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.322812   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.322817   25774 round_trippers.go:580]     Audit-Id: 7a971df4-048d-411f-84d5-edeca5d0a808
	I0912 18:42:28.322822   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.322830   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.322838   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.323280   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:28.323558   25774 pod_ready.go:92] pod "kube-proxy-gp457" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:28.323569   25774 pod_ready.go:81] duration metric: took 400.650162ms waiting for pod "kube-proxy-gp457" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.323577   25774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.519997   25774 request.go:629] Waited for 196.359932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:28.520052   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348977
	I0912 18:42:28.520057   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.520064   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.520070   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.522614   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.522643   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.522651   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.522657   25774 round_trippers.go:580]     Audit-Id: 81598ada-aa63-48f8-bbb7-bb3b59d03fca
	I0912 18:42:28.522663   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.522671   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.522676   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.522682   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.523030   25774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348977","namespace":"kube-system","uid":"69ef187d-8c5d-4b26-861e-4a2178c309e7","resourceVersion":"850","creationTimestamp":"2023-09-12T18:38:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.mirror":"bb3d3a4075cd4b7c2e743b506f392839","kubernetes.io/config.seen":"2023-09-12T18:37:56.784237754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I0912 18:42:28.719797   25774 request.go:629] Waited for 196.343169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.719852   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes/multinode-348977
	I0912 18:42:28.719857   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.719864   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.719870   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.722628   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:28.722647   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.722657   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.722665   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.722670   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.722690   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.722704   25774 round_trippers.go:580]     Audit-Id: 09261421-b5b9-47f1-8400-375ba280b4aa
	I0912 18:42:28.722709   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.723026   25774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-12T18:38:01Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I0912 18:42:28.723300   25774 pod_ready.go:92] pod "kube-scheduler-multinode-348977" in "kube-system" namespace has status "Ready":"True"
	I0912 18:42:28.723312   25774 pod_ready.go:81] duration metric: took 399.729056ms waiting for pod "kube-scheduler-multinode-348977" in "kube-system" namespace to be "Ready" ...
	I0912 18:42:28.723321   25774 pod_ready.go:38] duration metric: took 13.223027127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:42:28.723336   25774 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:42:28.723377   25774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:28.736121   25774 command_runner.go:130] > 1613
	I0912 18:42:28.736176   25774 api_server.go:72] duration metric: took 15.617469319s to wait for apiserver process to appear ...
	I0912 18:42:28.736186   25774 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:42:28.736202   25774 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:42:28.742568   25774 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:42:28.742691   25774 round_trippers.go:463] GET https://192.168.39.209:8443/version
	I0912 18:42:28.742706   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.742717   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.742742   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.743597   25774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0912 18:42:28.743611   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.743617   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.743622   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.743628   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.743635   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.743644   25774 round_trippers.go:580]     Content-Length: 263
	I0912 18:42:28.743652   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.743664   25774 round_trippers.go:580]     Audit-Id: 1f819583-ec91-4248-8d1d-f0faa5cdc977
	I0912 18:42:28.743686   25774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 18:42:28.743734   25774 api_server.go:141] control plane version: v1.28.1
	I0912 18:42:28.743746   25774 api_server.go:131] duration metric: took 7.554171ms to wait for apiserver health ...
	I0912 18:42:28.743753   25774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:42:28.920155   25774 request.go:629] Waited for 176.33099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:28.920222   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:28.920228   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:28.920239   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:28.920248   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:28.924609   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:28.924625   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:28.924631   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:28.924637   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:28.924642   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:28.924647   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:28.924652   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:28 GMT
	I0912 18:42:28.924657   25774 round_trippers.go:580]     Audit-Id: 61199736-c30a-4f20-a0fe-85ab567c6748
	I0912 18:42:28.926078   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0912 18:42:28.928488   25774 system_pods.go:59] 12 kube-system pods found
	I0912 18:42:28.928508   25774 system_pods.go:61] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running
	I0912 18:42:28.928516   25774 system_pods.go:61] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running
	I0912 18:42:28.928521   25774 system_pods.go:61] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:28.928529   25774 system_pods.go:61] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:28.928543   25774 system_pods.go:61] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running
	I0912 18:42:28.928549   25774 system_pods.go:61] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running
	I0912 18:42:28.928556   25774 system_pods.go:61] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running
	I0912 18:42:28.928564   25774 system_pods.go:61] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:28.928568   25774 system_pods.go:61] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:28.928575   25774 system_pods.go:61] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running
	I0912 18:42:28.928579   25774 system_pods.go:61] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running
	I0912 18:42:28.928583   25774 system_pods.go:61] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running
	I0912 18:42:28.928589   25774 system_pods.go:74] duration metric: took 184.827503ms to wait for pod list to return data ...
	I0912 18:42:28.928596   25774 default_sa.go:34] waiting for default service account to be created ...
	I0912 18:42:29.120018   25774 request.go:629] Waited for 191.358708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/default/serviceaccounts
	I0912 18:42:29.120097   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/default/serviceaccounts
	I0912 18:42:29.120104   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.120112   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.120126   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.123049   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:29.123069   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.123079   25774 round_trippers.go:580]     Content-Length: 261
	I0912 18:42:29.123088   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.123097   25774 round_trippers.go:580]     Audit-Id: 0cac5d85-cbe1-4c12-91ee-4a50deb388eb
	I0912 18:42:29.123106   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.123115   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.123122   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.123128   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.123154   25774 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"55ef2ca3-3fa0-482c-9704-129c61fdc121","resourceVersion":"365","creationTimestamp":"2023-09-12T18:38:17Z"}}]}
	I0912 18:42:29.123368   25774 default_sa.go:45] found service account: "default"
	I0912 18:42:29.123387   25774 default_sa.go:55] duration metric: took 194.785544ms for default service account to be created ...
	I0912 18:42:29.123402   25774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 18:42:29.319837   25774 request.go:629] Waited for 196.373018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:29.319891   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/namespaces/kube-system/pods
	I0912 18:42:29.319922   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.319951   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.319971   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.324234   25774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 18:42:29.324257   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.324267   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.324275   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.324283   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.324293   25774 round_trippers.go:580]     Audit-Id: 30f84bd7-2fdc-4719-8ffc-f3f8ff44f576
	I0912 18:42:29.324301   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.324310   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.325766   25774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bsdfd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b14b1b22-9cc1-44da-bab6-32ec6c417f9a","resourceVersion":"885","creationTimestamp":"2023-09-12T18:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fe4a9049-5177-4155-921b-229361fca251","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T18:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4a9049-5177-4155-921b-229361fca251\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0912 18:42:29.328222   25774 system_pods.go:86] 12 kube-system pods found
	I0912 18:42:29.328242   25774 system_pods.go:89] "coredns-5dd5756b68-bsdfd" [b14b1b22-9cc1-44da-bab6-32ec6c417f9a] Running
	I0912 18:42:29.328247   25774 system_pods.go:89] "etcd-multinode-348977" [1510b000-87cc-4e3c-9293-46db511afdb8] Running
	I0912 18:42:29.328252   25774 system_pods.go:89] "kindnet-rzmdg" [3018cc32-2f0e-4002-b3e5-5860047cc049] Running
	I0912 18:42:29.328257   25774 system_pods.go:89] "kindnet-vw7cg" [72d722e2-6010-4083-b225-cd2c84e7f205] Running
	I0912 18:42:29.328263   25774 system_pods.go:89] "kindnet-xs7zp" [631147b9-b008-4c63-8b6a-20f317337ca8] Running
	I0912 18:42:29.328270   25774 system_pods.go:89] "kube-apiserver-multinode-348977" [f540dfd0-b1d9-4e3f-b9ab-f02db770e920] Running
	I0912 18:42:29.328277   25774 system_pods.go:89] "kube-controller-manager-multinode-348977" [930d0357-f21e-4a4e-8c3b-2cff3263568f] Running
	I0912 18:42:29.328292   25774 system_pods.go:89] "kube-proxy-2wfpr" [774a14f5-3c1d-4a3b-a265-290361f0fbe3] Running
	I0912 18:42:29.328298   25774 system_pods.go:89] "kube-proxy-fvnqz" [d610f9be-c231-4aae-9870-e627ce41bf23] Running
	I0912 18:42:29.328302   25774 system_pods.go:89] "kube-proxy-gp457" [39d70e08-cba7-4545-a6eb-a2e9152458dc] Running
	I0912 18:42:29.328307   25774 system_pods.go:89] "kube-scheduler-multinode-348977" [69ef187d-8c5d-4b26-861e-4a2178c309e7] Running
	I0912 18:42:29.328310   25774 system_pods.go:89] "storage-provisioner" [dbe2e40d-63bd-4acd-a9cd-c34fd229887e] Running
	I0912 18:42:29.328316   25774 system_pods.go:126] duration metric: took 204.909135ms to wait for k8s-apps to be running ...
	I0912 18:42:29.328325   25774 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 18:42:29.328370   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:29.341359   25774 system_svc.go:56] duration metric: took 13.030228ms WaitForService to wait for kubelet.
	I0912 18:42:29.341381   25774 kubeadm.go:581] duration metric: took 16.222676844s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 18:42:29.341399   25774 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:42:29.519828   25774 request.go:629] Waited for 178.364112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:29.519911   25774 round_trippers.go:463] GET https://192.168.39.209:8443/api/v1/nodes
	I0912 18:42:29.519918   25774 round_trippers.go:469] Request Headers:
	I0912 18:42:29.519929   25774 round_trippers.go:473]     Accept: application/json, */*
	I0912 18:42:29.519940   25774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 18:42:29.522725   25774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 18:42:29.522742   25774 round_trippers.go:577] Response Headers:
	I0912 18:42:29.522749   25774 round_trippers.go:580]     Audit-Id: e00e13ba-2677-4829-b344-8ada38a7e166
	I0912 18:42:29.522755   25774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 18:42:29.522762   25774 round_trippers.go:580]     Content-Type: application/json
	I0912 18:42:29.522770   25774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 221e9242-f2a2-4ef6-9d52-51ad0b7e52c1
	I0912 18:42:29.522782   25774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c749f2ef-d796-4f0c-9353-8b10530cf728
	I0912 18:42:29.522797   25774 round_trippers.go:580]     Date: Tue, 12 Sep 2023 18:42:29 GMT
	I0912 18:42:29.523112   25774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-348977","uid":"95402ed4-5ab2-4d51-813a-1d4efb14142f","resourceVersion":"847","creationTimestamp":"2023-09-12T18:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7fcf473f700c1ee60c8afd1005162a3d3f02aa75","minikube.k8s.io/name":"multinode-348977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T18_38_06_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
	I0912 18:42:29.523631   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523649   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523658   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523664   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523677   25774 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:42:29.523686   25774 node_conditions.go:123] node cpu capacity is 2
	I0912 18:42:29.523694   25774 node_conditions.go:105] duration metric: took 182.290333ms to run NodePressure ...
	I0912 18:42:29.523707   25774 start.go:228] waiting for startup goroutines ...
	I0912 18:42:29.523715   25774 start.go:233] waiting for cluster config update ...
	I0912 18:42:29.523724   25774 start.go:242] writing updated cluster config ...
	I0912 18:42:29.524158   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:29.524248   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:29.527653   25774 out.go:177] * Starting worker node multinode-348977-m02 in cluster multinode-348977
	I0912 18:42:29.529157   25774 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:42:29.529180   25774 cache.go:57] Caching tarball of preloaded images
	I0912 18:42:29.529277   25774 preload.go:174] Found /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:42:29.529288   25774 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:42:29.529376   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:29.529545   25774 start.go:365] acquiring machines lock for multinode-348977-m02: {Name:mkb814e9f5e9709f943ea910e0cc7d91215dc74f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:42:29.529588   25774 start.go:369] acquired machines lock for "multinode-348977-m02" in 23.462µs
	I0912 18:42:29.529606   25774 start.go:96] Skipping create...Using existing machine configuration
	I0912 18:42:29.529615   25774 fix.go:54] fixHost starting: m02
	I0912 18:42:29.529896   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:42:29.529918   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:29.543842   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0912 18:42:29.544256   25774 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:29.544682   25774 main.go:141] libmachine: Using API Version  1
	I0912 18:42:29.544708   25774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:29.544985   25774 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:29.545132   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:29.545265   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetState
	I0912 18:42:29.546866   25774 fix.go:102] recreateIfNeeded on multinode-348977-m02: state=Stopped err=<nil>
	I0912 18:42:29.546891   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	W0912 18:42:29.547062   25774 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 18:42:29.548960   25774 out.go:177] * Restarting existing kvm2 VM for "multinode-348977-m02" ...
	I0912 18:42:29.550233   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .Start
	I0912 18:42:29.550396   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring networks are active...
	I0912 18:42:29.551115   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring network default is active
	I0912 18:42:29.551433   25774 main.go:141] libmachine: (multinode-348977-m02) Ensuring network mk-multinode-348977 is active
	I0912 18:42:29.551771   25774 main.go:141] libmachine: (multinode-348977-m02) Getting domain xml...
	I0912 18:42:29.552344   25774 main.go:141] libmachine: (multinode-348977-m02) Creating domain...
	I0912 18:42:30.767498   25774 main.go:141] libmachine: (multinode-348977-m02) Waiting to get IP...
	I0912 18:42:30.768372   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:30.768756   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:30.768796   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:30.768731   26026 retry.go:31] will retry after 235.940556ms: waiting for machine to come up
	I0912 18:42:31.006160   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.006647   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.006677   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.006603   26026 retry.go:31] will retry after 364.360851ms: waiting for machine to come up
	I0912 18:42:31.372196   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.372728   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.372759   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.372673   26026 retry.go:31] will retry after 381.551229ms: waiting for machine to come up
	I0912 18:42:31.756143   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:31.756569   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:31.756596   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:31.756516   26026 retry.go:31] will retry after 467.043566ms: waiting for machine to come up
	I0912 18:42:32.225092   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:32.225542   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:32.225565   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:32.225522   26026 retry.go:31] will retry after 717.918575ms: waiting for machine to come up
	I0912 18:42:32.944665   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:32.944984   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:32.945013   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:32.944938   26026 retry.go:31] will retry after 777.588344ms: waiting for machine to come up
	I0912 18:42:33.723615   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:33.724005   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:33.724028   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:33.723989   26026 retry.go:31] will retry after 1.005231305s: waiting for machine to come up
	I0912 18:42:34.730358   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:34.730734   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:34.730770   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:34.730686   26026 retry.go:31] will retry after 958.78563ms: waiting for machine to come up
	I0912 18:42:35.690983   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:35.691399   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:35.691421   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:35.691373   26026 retry.go:31] will retry after 1.539184895s: waiting for machine to come up
	I0912 18:42:37.231731   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:37.232165   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:37.232197   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:37.232143   26026 retry.go:31] will retry after 2.237252703s: waiting for machine to come up
	I0912 18:42:39.472512   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:39.472959   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:39.473011   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:39.472905   26026 retry.go:31] will retry after 2.152692302s: waiting for machine to come up
	I0912 18:42:41.627680   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:41.628098   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:41.628133   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:41.628032   26026 retry.go:31] will retry after 2.890854285s: waiting for machine to come up
	I0912 18:42:44.521895   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:44.522238   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | unable to find current IP address of domain multinode-348977-m02 in network mk-multinode-348977
	I0912 18:42:44.522262   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | I0912 18:42:44.522192   26026 retry.go:31] will retry after 2.979799431s: waiting for machine to come up
	I0912 18:42:47.505585   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.506105   25774 main.go:141] libmachine: (multinode-348977-m02) Found IP for machine: 192.168.39.55
	I0912 18:42:47.506134   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has current primary IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.506144   25774 main.go:141] libmachine: (multinode-348977-m02) Reserving static IP address...
	I0912 18:42:47.506564   25774 main.go:141] libmachine: (multinode-348977-m02) Reserved static IP address: 192.168.39.55
	I0912 18:42:47.506615   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "multinode-348977-m02", mac: "52:54:00:fb:c0:ce", ip: "192.168.39.55"} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.506635   25774 main.go:141] libmachine: (multinode-348977-m02) Waiting for SSH to be available...
	I0912 18:42:47.506659   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | skip adding static IP to network mk-multinode-348977 - found existing host DHCP lease matching {name: "multinode-348977-m02", mac: "52:54:00:fb:c0:ce", ip: "192.168.39.55"}
	I0912 18:42:47.506681   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Getting to WaitForSSH function...
	I0912 18:42:47.508611   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.508965   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.508992   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.509119   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Using SSH client type: external
	I0912 18:42:47.509153   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa (-rw-------)
	I0912 18:42:47.509178   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:42:47.509190   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | About to run SSH command:
	I0912 18:42:47.509201   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | exit 0
	I0912 18:42:47.594719   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | SSH cmd err, output: <nil>: 
	I0912 18:42:47.595034   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetConfigRaw
	I0912 18:42:47.595656   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:47.598153   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.598542   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.598576   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.598809   25774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/multinode-348977/config.json ...
	I0912 18:42:47.599008   25774 machine.go:88] provisioning docker machine ...
	I0912 18:42:47.599027   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:47.599233   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.599393   25774 buildroot.go:166] provisioning hostname "multinode-348977-m02"
	I0912 18:42:47.599410   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.599573   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.601705   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.602082   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.602107   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.602240   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:47.602444   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.602620   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.602777   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:47.602919   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:47.603241   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:47.603262   25774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348977-m02 && echo "multinode-348977-m02" | sudo tee /etc/hostname
	I0912 18:42:47.727967   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348977-m02
	
	I0912 18:42:47.727992   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.730980   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.731324   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.731357   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.731546   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:47.731734   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.731942   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:47.732071   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:47.732251   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:47.732720   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:47.732751   25774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348977-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348977-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348977-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:42:47.851882   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:42:47.851910   25774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3674/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3674/.minikube}
	I0912 18:42:47.851930   25774 buildroot.go:174] setting up certificates
	I0912 18:42:47.851944   25774 provision.go:83] configureAuth start
	I0912 18:42:47.851961   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetMachineName
	I0912 18:42:47.852222   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:47.854839   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.855194   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.855226   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.855337   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:47.857401   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.857747   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:47.857778   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:47.857894   25774 provision.go:138] copyHostCerts
	I0912 18:42:47.857926   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:42:47.857965   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem, removing ...
	I0912 18:42:47.857979   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem
	I0912 18:42:47.858051   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/cert.pem (1123 bytes)
	I0912 18:42:47.858137   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:42:47.858162   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem, removing ...
	I0912 18:42:47.858172   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem
	I0912 18:42:47.858209   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/key.pem (1675 bytes)
	I0912 18:42:47.858270   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:42:47.858293   25774 exec_runner.go:144] found /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem, removing ...
	I0912 18:42:47.858300   25774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem
	I0912 18:42:47.858334   25774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3674/.minikube/ca.pem (1078 bytes)
	I0912 18:42:47.858394   25774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca-key.pem org=jenkins.multinode-348977-m02 san=[192.168.39.55 192.168.39.55 localhost 127.0.0.1 minikube multinode-348977-m02]
	I0912 18:42:48.213648   25774 provision.go:172] copyRemoteCerts
	I0912 18:42:48.213711   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:42:48.213739   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.216496   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.216875   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.216910   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.217086   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.217304   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.217440   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.217540   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:48.299272   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 18:42:48.299343   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 18:42:48.323067   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 18:42:48.323135   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0912 18:42:48.346811   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 18:42:48.346879   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 18:42:48.369076   25774 provision.go:86] duration metric: configureAuth took 517.116419ms
	I0912 18:42:48.369101   25774 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:42:48.369320   25774 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:42:48.369360   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:48.369693   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.372404   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.372825   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.372851   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.373017   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.373198   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.373387   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.373552   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.373737   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.374095   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.374108   25774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 18:42:48.484155   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 18:42:48.484175   25774 buildroot.go:70] root file system type: tmpfs
	I0912 18:42:48.484269   25774 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 18:42:48.484284   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.486806   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.487163   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.487199   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.487339   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.487537   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.487696   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.487860   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.487993   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.488283   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.488362   25774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.209"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 18:42:48.611547   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.209
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 18:42:48.611575   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:48.614223   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.614651   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:48.614685   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:48.614810   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:48.615012   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.615161   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:48.615320   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:48.615531   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:48.615880   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:48.615910   25774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 18:42:49.491968   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 18:42:49.491990   25774 machine.go:91] provisioned docker machine in 1.892968996s
	I0912 18:42:49.492001   25774 start.go:300] post-start starting for "multinode-348977-m02" (driver="kvm2")
	I0912 18:42:49.492011   25774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:42:49.492033   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.492389   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:42:49.492428   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.495587   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.496039   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.496074   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.496235   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.496409   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.496557   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.496709   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.580048   25774 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:42:49.584124   25774 command_runner.go:130] > NAME=Buildroot
	I0912 18:42:49.584146   25774 command_runner.go:130] > VERSION=2021.02.12-1-gaa74cea-dirty
	I0912 18:42:49.584153   25774 command_runner.go:130] > ID=buildroot
	I0912 18:42:49.584161   25774 command_runner.go:130] > VERSION_ID=2021.02.12
	I0912 18:42:49.584168   25774 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0912 18:42:49.584316   25774 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:42:49.584334   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/addons for local assets ...
	I0912 18:42:49.584409   25774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3674/.minikube/files for local assets ...
	I0912 18:42:49.584509   25774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> 108482.pem in /etc/ssl/certs
	I0912 18:42:49.584523   25774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem -> /etc/ssl/certs/108482.pem
	I0912 18:42:49.584635   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 18:42:49.592773   25774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/ssl/certs/108482.pem --> /etc/ssl/certs/108482.pem (1708 bytes)
	I0912 18:42:49.617623   25774 start.go:303] post-start completed in 125.608825ms
	I0912 18:42:49.617646   25774 fix.go:56] fixHost completed within 20.088031606s
	I0912 18:42:49.617665   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.620435   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.620845   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.620869   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.621069   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.621262   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.621404   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.621570   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.621758   25774 main.go:141] libmachine: Using SSH client type: native
	I0912 18:42:49.622052   25774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0912 18:42:49.622063   25774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0912 18:42:49.731465   25774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694544169.678612481
	
	I0912 18:42:49.731485   25774 fix.go:206] guest clock: 1694544169.678612481
	I0912 18:42:49.731492   25774 fix.go:219] Guest: 2023-09-12 18:42:49.678612481 +0000 UTC Remote: 2023-09-12 18:42:49.617649209 +0000 UTC m=+83.981581209 (delta=60.963272ms)
	I0912 18:42:49.731504   25774 fix.go:190] guest clock delta is within tolerance: 60.963272ms
	I0912 18:42:49.731513   25774 start.go:83] releasing machines lock for "multinode-348977-m02", held for 20.201911405s
	I0912 18:42:49.731541   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.731783   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:42:49.734410   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.734890   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.734925   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.737058   25774 out.go:177] * Found network options:
	I0912 18:42:49.738484   25774 out.go:177]   - NO_PROXY=192.168.39.209
	W0912 18:42:49.739948   25774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 18:42:49.739975   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740468   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740681   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:42:49.740737   25774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:42:49.740784   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	W0912 18:42:49.740858   25774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 18:42:49.740942   25774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 18:42:49.740979   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:42:49.743639   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.743671   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744084   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.744116   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744145   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:42:41 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:42:49.744165   25774 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:42:49.744240   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.744410   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:42:49.744416   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.744596   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.744599   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:42:49.744774   25774 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:42:49.744769   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.744886   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:42:49.854820   25774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 18:42:49.855190   25774 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 18:42:49.855233   25774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:42:49.855293   25774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:42:49.872592   25774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0912 18:42:49.872900   25774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:42:49.872923   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:42:49.873033   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:42:49.890584   25774 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0912 18:42:49.891097   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:42:49.901256   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:42:49.911217   25774 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:42:49.911258   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:42:49.921924   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:42:49.932287   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:42:49.942216   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:42:49.952004   25774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:42:49.962020   25774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:42:49.971792   25774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:42:49.980297   25774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 18:42:49.980393   25774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:42:49.989544   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:50.094046   25774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:42:50.115209   25774 start.go:469] detecting cgroup driver to use...
	I0912 18:42:50.115284   25774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 18:42:50.127032   25774 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0912 18:42:50.127923   25774 command_runner.go:130] > [Unit]
	I0912 18:42:50.127939   25774 command_runner.go:130] > Description=Docker Application Container Engine
	I0912 18:42:50.127944   25774 command_runner.go:130] > Documentation=https://docs.docker.com
	I0912 18:42:50.127950   25774 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0912 18:42:50.127955   25774 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0912 18:42:50.127962   25774 command_runner.go:130] > StartLimitBurst=3
	I0912 18:42:50.127966   25774 command_runner.go:130] > StartLimitIntervalSec=60
	I0912 18:42:50.127971   25774 command_runner.go:130] > [Service]
	I0912 18:42:50.127976   25774 command_runner.go:130] > Type=notify
	I0912 18:42:50.127985   25774 command_runner.go:130] > Restart=on-failure
	I0912 18:42:50.127996   25774 command_runner.go:130] > Environment=NO_PROXY=192.168.39.209
	I0912 18:42:50.128008   25774 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0912 18:42:50.128019   25774 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0912 18:42:50.128032   25774 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0912 18:42:50.128039   25774 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0912 18:42:50.128046   25774 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0912 18:42:50.128053   25774 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0912 18:42:50.128062   25774 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0912 18:42:50.128071   25774 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0912 18:42:50.128083   25774 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0912 18:42:50.128090   25774 command_runner.go:130] > ExecStart=
	I0912 18:42:50.128114   25774 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0912 18:42:50.128127   25774 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0912 18:42:50.128134   25774 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0912 18:42:50.128140   25774 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0912 18:42:50.128145   25774 command_runner.go:130] > LimitNOFILE=infinity
	I0912 18:42:50.128149   25774 command_runner.go:130] > LimitNPROC=infinity
	I0912 18:42:50.128154   25774 command_runner.go:130] > LimitCORE=infinity
	I0912 18:42:50.128161   25774 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0912 18:42:50.128168   25774 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0912 18:42:50.128178   25774 command_runner.go:130] > TasksMax=infinity
	I0912 18:42:50.128185   25774 command_runner.go:130] > TimeoutStartSec=0
	I0912 18:42:50.128197   25774 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0912 18:42:50.128208   25774 command_runner.go:130] > Delegate=yes
	I0912 18:42:50.128218   25774 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0912 18:42:50.128256   25774 command_runner.go:130] > KillMode=process
	I0912 18:42:50.128266   25774 command_runner.go:130] > [Install]
	I0912 18:42:50.128275   25774 command_runner.go:130] > WantedBy=multi-user.target
	I0912 18:42:50.128490   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:42:50.140278   25774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 18:42:50.156780   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 18:42:50.169134   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:42:50.180997   25774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:42:50.207570   25774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:42:50.221227   25774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:42:50.237822   25774 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0912 18:42:50.238226   25774 ssh_runner.go:195] Run: which cri-dockerd
	I0912 18:42:50.241514   25774 command_runner.go:130] > /usr/bin/cri-dockerd
	I0912 18:42:50.241908   25774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 18:42:50.250024   25774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 18:42:50.269261   25774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 18:42:50.375301   25774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 18:42:50.482348   25774 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 18:42:50.482378   25774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 18:42:50.499144   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:50.600957   25774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 18:42:52.035593   25774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4345964s)
	I0912 18:42:52.035674   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:42:52.134695   25774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 18:42:52.252441   25774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 18:42:52.363710   25774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:42:52.471147   25774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 18:42:52.484624   25774 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0912 18:42:52.486932   25774 out.go:177] 
	W0912 18:42:52.488525   25774 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0912 18:42:52.488540   25774 out.go:239] * 
	W0912 18:42:52.489285   25774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 18:42:52.491138   25774 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-12 18:41:37 UTC, ends at Tue 2023-09-12 18:42:56 UTC. --
	Sep 12 18:42:11 multinode-348977 dockerd[811]: time="2023-09-12T18:42:11.024190283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:11 multinode-348977 dockerd[811]: time="2023-09-12T18:42:11.024199800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:13 multinode-348977 cri-dockerd[1025]: time="2023-09-12T18:42:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3ec8e106fac65fe861579e38e010f99847ddca3d0f37581a183693b9bcd14d4/resolv.conf as [nameserver 192.168.122.1]"
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.447136452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.451788377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.451882752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:13 multinode-348977 dockerd[811]: time="2023-09-12T18:42:13.452030611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.061851006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.062195884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.062216451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.062282116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063143164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063368836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063418603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.063433894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 cri-dockerd[1025]: time="2023-09-12T18:42:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9df497b51c1b90fe37ef8ae9f7ebb72d0151d57610fa49d4fe1fd419f9ce2ef4/resolv.conf as [nameserver 192.168.122.1]"
	Sep 12 18:42:25 multinode-348977 cri-dockerd[1025]: time="2023-09-12T18:42:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3688028545c3998c5f75e4d5e6621c4c5a0e73bbe71c9395d6387d5b29ed167d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.794695839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.794887847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.795127003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.814082052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925129152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925414025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925443070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:42:25 multinode-348977 dockerd[811]: time="2023-09-12T18:42:25.925457752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	3d839fe423398       8c811b4aec35f                                                                                         31 seconds ago      Running             busybox                   1                   3688028545c39
	0fd05c38ac077       ead0a4a53df89                                                                                         31 seconds ago      Running             coredns                   1                   9df497b51c1b9
	c71d7c92a0630       c7d1297425461                                                                                         43 seconds ago      Running             kindnet-cni               1                   c3ec8e106fac6
	a2e119ff0bf65       6e38f40d628db                                                                                         46 seconds ago      Running             storage-provisioner       1                   25ec6eea906d3
	8983d7a34d7e1       6cdbabde3874e                                                                                         47 seconds ago      Running             kube-proxy                1                   2ab96fa55f0ef
	4b0a3970f77ff       b462ce0c8b1ff                                                                                         51 seconds ago      Running             kube-scheduler            1                   58ca84d5ee1f9
	d8d42361d6c78       821b3dfea27be                                                                                         52 seconds ago      Running             kube-controller-manager   1                   47e548c7595e8
	f7e6e4ccf8c6d       73deb9a3f7025                                                                                         52 seconds ago      Running             etcd                      1                   737cdef8c716b
	ea58445474f8a       5c801295c21d0                                                                                         52 seconds ago      Running             kube-apiserver            1                   009b38f39bf1d
	5d0e5a575e7a7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   57d559c84696d
	43aaf5c3bf6ed       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       0                   96a48d1e6808d
	012e610913534       ead0a4a53df89                                                                                         4 minutes ago       Exited              coredns                   0                   d9fcb5b501768
	5486463296b78       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Exited              kindnet-cni               0                   061d1cef513dc
	7791e737cea36       6cdbabde3874e                                                                                         4 minutes ago       Exited              kube-proxy                0                   1e31cfd643be5
	5253cfd31af01       b462ce0c8b1ff                                                                                         4 minutes ago       Exited              kube-scheduler            0                   14cac5d320ea7
	ff41c9b085ade       821b3dfea27be                                                                                         4 minutes ago       Exited              kube-controller-manager   0                   a0de152dc98db
	c0587efa38dbd       73deb9a3f7025                                                                                         4 minutes ago       Exited              etcd                      0                   e113d197f01ff
	3627cce96a103       5c801295c21d0                                                                                         4 minutes ago       Exited              kube-apiserver            0                   7fabc68ca2332
	
	* 
	* ==> coredns [012e61091353] <==
	* [INFO] 10.244.0.3:56404 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827513s
	[INFO] 10.244.0.3:39909 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058733s
	[INFO] 10.244.0.3:40512 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090095s
	[INFO] 10.244.0.3:60406 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000873796s
	[INFO] 10.244.0.3:57880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000029978s
	[INFO] 10.244.0.3:34549 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000022714s
	[INFO] 10.244.0.3:38341 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000026033s
	[INFO] 10.244.1.2:39841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228291s
	[INFO] 10.244.1.2:51650 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181493s
	[INFO] 10.244.1.2:51468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191351s
	[INFO] 10.244.1.2:42384 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131748s
	[INFO] 10.244.0.3:39782 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000285286s
	[INFO] 10.244.0.3:34979 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007029s
	[INFO] 10.244.0.3:33076 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041543s
	[INFO] 10.244.0.3:52995 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035864s
	[INFO] 10.244.1.2:51087 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160281s
	[INFO] 10.244.1.2:35395 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000230629s
	[INFO] 10.244.1.2:51952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000164424s
	[INFO] 10.244.1.2:41607 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178887s
	[INFO] 10.244.0.3:54371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158074s
	[INFO] 10.244.0.3:36708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133949s
	[INFO] 10.244.0.3:33324 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010055s
	[INFO] 10.244.0.3:60814 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006458s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [0fd05c38ac07] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47641 - 55285 "HINFO IN 6364648132792803096.1436698189111927659. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069508904s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-348977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fcf473f700c1ee60c8afd1005162a3d3f02aa75
	                    minikube.k8s.io/name=multinode-348977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T18_38_06_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 18:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348977
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 18:42:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:38:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:38:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:38:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 18:42:15 +0000   Tue, 12 Sep 2023 18:42:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    multinode-348977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 16345d08910a4f1386ba34dab54a7536
	  System UUID:                16345d08-910a-4f13-86ba-34dab54a7536
	  Boot ID:                    95f721a1-a2ae-4351-ac7c-daa64c367fe3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lzrq4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-5dd5756b68-bsdfd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m39s
	  kube-system                 etcd-multinode-348977                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-xs7zp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m39s
	  kube-system                 kube-apiserver-multinode-348977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-controller-manager-multinode-348977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-gp457                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-multinode-348977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 4m37s               kube-proxy       
	  Normal  Starting                 45s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  5m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 5m)  kubelet          Node multinode-348977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 5m)  kubelet          Node multinode-348977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 5m)  kubelet          Node multinode-348977 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m51s               kubelet          Node multinode-348977 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m51s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m51s               kubelet          Node multinode-348977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s               kubelet          Node multinode-348977 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m51s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m40s               node-controller  Node multinode-348977 event: Registered Node multinode-348977 in Controller
	  Normal  NodeReady                4m27s               kubelet          Node multinode-348977 status is now: NodeReady
	  Normal  Starting                 53s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)   kubelet          Node multinode-348977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)   kubelet          Node multinode-348977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x7 over 53s)   kubelet          Node multinode-348977 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  53s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36s                 node-controller  Node multinode-348977 event: Registered Node multinode-348977 in Controller
	
	
	Name:               multinode-348977-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348977-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 18:39:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348977-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 18:40:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 18:39:35 +0000   Tue, 12 Sep 2023 18:39:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-348977-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e87658c5a254c91b54a24e8fcd285d9
	  System UUID:                3e87658c-5a25-4c91-b54a-24e8fcd285d9
	  Boot ID:                    d5722b21-541e-4bbb-875e-ed8e4fe5010b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-k9v4h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kindnet-rzmdg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-proxy-2wfpr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m51s (x5 over 3m53s)  kubelet          Node multinode-348977-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x5 over 3m53s)  kubelet          Node multinode-348977-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x5 over 3m53s)  kubelet          Node multinode-348977-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m50s                  node-controller  Node multinode-348977-m02 event: Registered Node multinode-348977-m02 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node multinode-348977-m02 status is now: NodeReady
	  Normal  RegisteredNode           36s                    node-controller  Node multinode-348977-m02 event: Registered Node multinode-348977-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep12 18:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070750] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.292632] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.291624] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133732] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.499881] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.983408] systemd-fstab-generator[496]: Ignoring "noauto" for root device
	[  +0.109705] systemd-fstab-generator[507]: Ignoring "noauto" for root device
	[  +1.130579] systemd-fstab-generator[734]: Ignoring "noauto" for root device
	[  +0.275077] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +0.107373] systemd-fstab-generator[783]: Ignoring "noauto" for root device
	[  +0.118134] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +1.545086] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.114032] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.102407] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.102958] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.126032] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[Sep12 18:42] systemd-fstab-generator[1261]: Ignoring "noauto" for root device
	[  +0.393432] kauditd_printk_skb: 67 callbacks suppressed
	[ +17.604251] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [c0587efa38db] <==
	* {"level":"info","ts":"2023-09-12T18:37:59.755156Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T18:37:59.755342Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T18:37:59.756647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T18:37:59.764436Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T18:37:59.767946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T18:39:03.737137Z","caller":"traceutil/trace.go:171","msg":"trace[1521857685] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"173.2868ms","start":"2023-09-12T18:39:03.563812Z","end":"2023-09-12T18:39:03.737099Z","steps":["trace[1521857685] 'process raft request'  (duration: 110.524525ms)","trace[1521857685] 'compare'  (duration: 62.397452ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:39:04.080004Z","caller":"traceutil/trace.go:171","msg":"trace[1993585711] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"336.233237ms","start":"2023-09-12T18:39:03.743757Z","end":"2023-09-12T18:39:04.07999Z","steps":["trace[1993585711] 'process raft request'  (duration: 272.765002ms)","trace[1993585711] 'compare'  (duration: 63.297167ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T18:39:04.081198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-12T18:39:03.743742Z","time spent":"336.779834ms","remote":"127.0.0.1:53564","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2350,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-7q85d\" mod_revision:479 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-7q85d\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-7q85d\" > >"}
	{"level":"info","ts":"2023-09-12T18:39:04.08013Z","caller":"traceutil/trace.go:171","msg":"trace[852958403] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"260.097444ms","start":"2023-09-12T18:39:03.820021Z","end":"2023-09-12T18:39:04.080119Z","steps":["trace[852958403] 'process raft request'  (duration: 259.900086ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:39:59.246506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.898009ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6276762833927192222 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-348977-m03.17843ac9faa7f269\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-348977-m03.17843ac9faa7f269\" value_size:642 lease:6276762833927191776 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-09-12T18:39:59.246917Z","caller":"traceutil/trace.go:171","msg":"trace[1350052669] linearizableReadLoop","detail":"{readStateIndex:649; appliedIndex:646; }","duration":"219.29457ms","start":"2023-09-12T18:39:59.027585Z","end":"2023-09-12T18:39:59.24688Z","steps":["trace[1350052669] 'read index received'  (duration: 21.629106ms)","trace[1350052669] 'applied index is now lower than readState.Index'  (duration: 197.66468ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:39:59.247057Z","caller":"traceutil/trace.go:171","msg":"trace[1628836798] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"270.154541ms","start":"2023-09-12T18:39:58.976886Z","end":"2023-09-12T18:39:59.247041Z","steps":["trace[1628836798] 'process raft request'  (duration: 72.318815ms)","trace[1628836798] 'compare'  (duration: 196.668625ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:39:59.247334Z","caller":"traceutil/trace.go:171","msg":"trace[1411681711] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"223.684554ms","start":"2023-09-12T18:39:59.02364Z","end":"2023-09-12T18:39:59.247324Z","steps":["trace[1411681711] 'process raft request'  (duration: 223.155933ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:39:59.250707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.171347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-348977-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-12T18:39:59.251217Z","caller":"traceutil/trace.go:171","msg":"trace[1151974734] range","detail":"{range_begin:/registry/csinodes/multinode-348977-m03; range_end:; response_count:0; response_revision:612; }","duration":"223.687646ms","start":"2023-09-12T18:39:59.027508Z","end":"2023-09-12T18:39:59.251196Z","steps":["trace[1151974734] 'agreement among raft nodes before linearized reading'  (duration: 223.094864ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-12T18:40:58.411708Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-12T18:40:58.411837Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-348977","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2023-09-12T18:40:58.412086Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T18:40:58.412209Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T18:40:58.456342Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T18:40:58.456474Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-12T18:40:58.456764Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2023-09-12T18:40:58.460525Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:40:58.460668Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:40:58.460688Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-348977","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	* 
	* ==> etcd [f7e6e4ccf8c6] <==
	* {"level":"info","ts":"2023-09-12T18:42:05.634482Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2023-09-12T18:42:05.63468Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T18:42:05.634749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T18:42:05.639015Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-12T18:42:05.639265Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T18:42:05.639317Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-12T18:42:05.639717Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T18:42:05.639901Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T18:42:05.639909Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T18:42:05.640234Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:42:05.640271Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2023-09-12T18:42:06.608994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-12T18:42:06.609059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-12T18:42:06.609075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2023-09-12T18:42:06.609087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.609092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.609186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.609196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2023-09-12T18:42:06.611463Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:multinode-348977 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T18:42:06.611714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T18:42:06.613046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T18:42:06.613138Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T18:42:06.61399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T18:42:06.614123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T18:42:06.614836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	
	* 
	* ==> kernel <==
	*  18:42:56 up 1 min,  0 users,  load average: 0.44, 0.18, 0.06
	Linux multinode-348977 5.10.57 #1 SMP Thu Sep 7 15:04:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [5486463296b7] <==
	* I0912 18:40:16.108139       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:16.108176       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.2.0/24] 
	I0912 18:40:26.179114       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:26.179155       1 main.go:227] handling current node
	I0912 18:40:26.179168       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:26.179183       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:26.179528       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:26.179545       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.2.0/24] 
	I0912 18:40:36.192286       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:36.192332       1 main.go:227] handling current node
	I0912 18:40:36.192342       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:36.192348       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:36.192561       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:36.192570       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.2.0/24] 
	I0912 18:40:46.206291       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:46.206722       1 main.go:227] handling current node
	I0912 18:40:46.206890       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:46.206975       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:56.221569       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:40:56.221992       1 main.go:227] handling current node
	I0912 18:40:56.222238       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:40:56.222363       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:40:56.222810       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:40:56.222867       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:40:56.223015       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.76 Flags: [] Table: 0} 
	
	* 
	* ==> kindnet [c71d7c92a063] <==
	* I0912 18:42:14.547468       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.76 Flags: [] Table: 0} 
	I0912 18:42:24.560514       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:24.560568       1 main.go:227] handling current node
	I0912 18:42:24.560584       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:24.560590       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:24.560688       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:24.560692       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:42:34.572368       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:34.572426       1 main.go:227] handling current node
	I0912 18:42:34.572443       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:34.572450       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:34.572739       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:34.572749       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:42:44.585177       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:44.585239       1 main.go:227] handling current node
	I0912 18:42:44.585255       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:44.585264       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:44.595162       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:44.595221       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	I0912 18:42:54.609226       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0912 18:42:54.609391       1 main.go:227] handling current node
	I0912 18:42:54.609419       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0912 18:42:54.609565       1 main.go:250] Node multinode-348977-m02 has CIDR [10.244.1.0/24] 
	I0912 18:42:54.609902       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0912 18:42:54.610215       1 main.go:250] Node multinode-348977-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [3627cce96a10] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 18:41:08.216377       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 18:41:08.304609       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 18:41:08.374873       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [ea58445474f8] <==
	* I0912 18:42:08.024194       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0912 18:42:08.024731       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0912 18:42:08.024860       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0912 18:42:08.086794       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 18:42:08.125110       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0912 18:42:08.125874       1 aggregator.go:166] initial CRD sync complete...
	I0912 18:42:08.125917       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 18:42:08.125979       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 18:42:08.125987       1 cache.go:39] Caches are synced for autoregister controller
	I0912 18:42:08.165078       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 18:42:08.165739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 18:42:08.168667       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0912 18:42:08.170197       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0912 18:42:08.170238       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0912 18:42:08.171808       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0912 18:42:08.172547       1 shared_informer.go:318] Caches are synced for configmaps
	I0912 18:42:08.991751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0912 18:42:09.405255       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.209]
	I0912 18:42:09.407042       1 controller.go:624] quota admission added evaluator for: endpoints
	I0912 18:42:09.413668       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 18:42:11.068168       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0912 18:42:11.342916       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 18:42:11.354669       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 18:42:11.433098       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 18:42:11.443027       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [d8d42361d6c7] <==
	* I0912 18:42:20.629915       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0912 18:42:20.631450       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0912 18:42:20.632435       1 shared_informer.go:318] Caches are synced for persistent volume
	I0912 18:42:20.632450       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0912 18:42:20.636531       1 shared_informer.go:318] Caches are synced for ephemeral
	I0912 18:42:20.707628       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0912 18:42:20.719689       1 shared_informer.go:318] Caches are synced for disruption
	I0912 18:42:20.729750       1 shared_informer.go:318] Caches are synced for deployment
	I0912 18:42:20.735143       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 18:42:20.783145       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0912 18:42:20.802339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.506268ms"
	I0912 18:42:20.802460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.515µs"
	I0912 18:42:20.804054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.54866ms"
	I0912 18:42:20.805308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.671µs"
	I0912 18:42:20.817385       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 18:42:21.179187       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 18:42:21.190568       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 18:42:21.190666       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0912 18:42:26.981846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.908539ms"
	I0912 18:42:26.982359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="259.719µs"
	I0912 18:42:27.008429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.092µs"
	I0912 18:42:27.046176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.943866ms"
	I0912 18:42:27.054742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="374.084µs"
	I0912 18:42:55.255407       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:42:55.609884       1 event.go:307] "Event occurred" object="multinode-348977-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-348977-m03 event: Removing Node multinode-348977-m03 from Controller"
	
	* 
	* ==> kube-controller-manager [ff41c9b085ad] <==
	* I0912 18:39:20.441280       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:39:23.006486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0912 18:39:23.025222       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-k9v4h"
	I0912 18:39:23.044852       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lzrq4"
	I0912 18:39:23.064705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.136161ms"
	I0912 18:39:23.087255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.383929ms"
	I0912 18:39:23.113908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.573207ms"
	I0912 18:39:23.114329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.008µs"
	I0912 18:39:24.882805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.533382ms"
	I0912 18:39:24.883305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.901µs"
	I0912 18:39:25.065646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.568569ms"
	I0912 18:39:25.065989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.382µs"
	I0912 18:39:59.249887       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348977-m03\" does not exist"
	I0912 18:39:59.249943       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:39:59.291166       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vw7cg"
	I0912 18:39:59.301329       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-348977-m03" podCIDRs=["10.244.2.0/24"]
	I0912 18:39:59.301879       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fvnqz"
	I0912 18:40:01.820031       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-348977-m03"
	I0912 18:40:01.820299       1 event.go:307] "Event occurred" object="multinode-348977-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-348977-m03 event: Registered Node multinode-348977-m03 in Controller"
	I0912 18:40:11.668310       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:40:45.930947       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:40:46.792815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348977-m03\" does not exist"
	I0912 18:40:46.793262       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	I0912 18:40:46.803362       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-348977-m03" podCIDRs=["10.244.3.0/24"]
	I0912 18:40:55.004052       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-348977-m02"
	
	* 
	* ==> kube-proxy [7791e737cea3] <==
	* I0912 18:38:18.983165       1 server_others.go:69] "Using iptables proxy"
	I0912 18:38:19.001200       1 node.go:141] Successfully retrieved node IP: 192.168.39.209
	I0912 18:38:19.068176       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 18:38:19.068229       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 18:38:19.071127       1 server_others.go:152] "Using iptables Proxier"
	I0912 18:38:19.071197       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 18:38:19.071594       1 server.go:846] "Version info" version="v1.28.1"
	I0912 18:38:19.071606       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:38:19.072515       1 config.go:188] "Starting service config controller"
	I0912 18:38:19.072564       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 18:38:19.072591       1 config.go:97] "Starting endpoint slice config controller"
	I0912 18:38:19.072595       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 18:38:19.073105       1 config.go:315] "Starting node config controller"
	I0912 18:38:19.073145       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 18:38:19.174208       1 shared_informer.go:318] Caches are synced for service config
	I0912 18:38:19.174214       1 shared_informer.go:318] Caches are synced for node config
	I0912 18:38:19.174326       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [8983d7a34d7e] <==
	* I0912 18:42:10.810452       1 server_others.go:69] "Using iptables proxy"
	I0912 18:42:10.831289       1 node.go:141] Successfully retrieved node IP: 192.168.39.209
	I0912 18:42:10.920576       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 18:42:10.920631       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 18:42:10.923798       1 server_others.go:152] "Using iptables Proxier"
	I0912 18:42:10.923877       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 18:42:10.926195       1 server.go:846] "Version info" version="v1.28.1"
	I0912 18:42:10.926239       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:42:10.927551       1 config.go:188] "Starting service config controller"
	I0912 18:42:10.927760       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 18:42:10.927788       1 config.go:97] "Starting endpoint slice config controller"
	I0912 18:42:10.927793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 18:42:10.929530       1 config.go:315] "Starting node config controller"
	I0912 18:42:10.929541       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 18:42:11.029917       1 shared_informer.go:318] Caches are synced for node config
	I0912 18:42:11.030034       1 shared_informer.go:318] Caches are synced for service config
	I0912 18:42:11.030058       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [4b0a3970f77f] <==
	* I0912 18:42:06.171781       1 serving.go:348] Generated self-signed cert in-memory
	W0912 18:42:08.058711       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 18:42:08.058759       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 18:42:08.058777       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 18:42:08.058783       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 18:42:08.097758       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0912 18:42:08.097814       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:42:08.101187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 18:42:08.101345       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0912 18:42:08.101692       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 18:42:08.102315       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0912 18:42:08.202244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [5253cfd31af0] <==
	* W0912 18:38:02.049647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 18:38:02.049757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 18:38:02.049872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 18:38:02.049883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 18:38:02.057213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 18:38:02.057263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 18:38:02.894816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 18:38:02.894869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 18:38:02.918705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 18:38:02.918805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 18:38:03.008018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 18:38:03.008070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 18:38:03.013545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 18:38:03.013592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 18:38:03.055628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 18:38:03.055654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 18:38:03.091601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 18:38:03.091889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0912 18:38:03.315538       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 18:38:03.315564       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0912 18:38:06.105760       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 18:40:58.389016       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0912 18:40:58.389137       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0912 18:40:58.390065       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0912 18:40:58.390365       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-12 18:41:37 UTC, ends at Tue 2023-09-12 18:42:57 UTC. --
	Sep 12 18:42:09 multinode-348977 kubelet[1267]: I0912 18:42:09.823198    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab96fa55f0ef28f704c9d1745b1c48a4be93c094ea1cf741253b69536744b49"
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.709075    1267 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711261    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume podName:b14b1b22-9cc1-44da-bab6-32ec6c417f9a nodeName:}" failed. No retries permitted until 2023-09-12 18:42:12.711142949 +0000 UTC m=+9.960675589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume") pod "coredns-5dd5756b68-bsdfd" (UID: "b14b1b22-9cc1-44da-bab6-32ec6c417f9a") : object "kube-system"/"coredns" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711708    1267 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711731    1267 projected.go:198] Error preparing data for projected volume kube-api-access-fth6t for pod default/busybox-5bc68d56bd-lzrq4: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:10 multinode-348977 kubelet[1267]: E0912 18:42:10.711773    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t podName:e821b198-38ff-4455-9acb-74f6774ee805 nodeName:}" failed. No retries permitted until 2023-09-12 18:42:12.71176182 +0000 UTC m=+9.961294459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fth6t" (UniqueName: "kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t") pod "busybox-5bc68d56bd-lzrq4" (UID: "e821b198-38ff-4455-9acb-74f6774ee805") : object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727304    1267 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727362    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume podName:b14b1b22-9cc1-44da-bab6-32ec6c417f9a nodeName:}" failed. No retries permitted until 2023-09-12 18:42:16.727348407 +0000 UTC m=+13.976881032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume") pod "coredns-5dd5756b68-bsdfd" (UID: "b14b1b22-9cc1-44da-bab6-32ec6c417f9a") : object "kube-system"/"coredns" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727689    1267 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727709    1267 projected.go:198] Error preparing data for projected volume kube-api-access-fth6t for pod default/busybox-5bc68d56bd-lzrq4: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:12 multinode-348977 kubelet[1267]: E0912 18:42:12.727755    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t podName:e821b198-38ff-4455-9acb-74f6774ee805 nodeName:}" failed. No retries permitted until 2023-09-12 18:42:16.727741796 +0000 UTC m=+13.977274421 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fth6t" (UniqueName: "kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t") pod "busybox-5bc68d56bd-lzrq4" (UID: "e821b198-38ff-4455-9acb-74f6774ee805") : object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: I0912 18:42:13.310253    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ec8e106fac65fe861579e38e010f99847ddca3d0f37581a183693b9bcd14d4"
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: I0912 18:42:13.348176    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ec6eea906d3f346783e33b09260ff2422a9e5aa9b4883c67c9173939765553"
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: E0912 18:42:13.348197    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-bsdfd" podUID="b14b1b22-9cc1-44da-bab6-32ec6c417f9a"
	Sep 12 18:42:13 multinode-348977 kubelet[1267]: E0912 18:42:13.351638    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-lzrq4" podUID="e821b198-38ff-4455-9acb-74f6774ee805"
	Sep 12 18:42:15 multinode-348977 kubelet[1267]: E0912 18:42:15.083415    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-bsdfd" podUID="b14b1b22-9cc1-44da-bab6-32ec6c417f9a"
	Sep 12 18:42:15 multinode-348977 kubelet[1267]: E0912 18:42:15.083593    1267 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-lzrq4" podUID="e821b198-38ff-4455-9acb-74f6774ee805"
	Sep 12 18:42:15 multinode-348977 kubelet[1267]: I0912 18:42:15.420450    1267 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.762853    1267 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.763020    1267 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.763745    1267 projected.go:198] Error preparing data for projected volume kube-api-access-fth6t for pod default/busybox-5bc68d56bd-lzrq4: object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.763650    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume podName:b14b1b22-9cc1-44da-bab6-32ec6c417f9a nodeName:}" failed. No retries permitted until 2023-09-12 18:42:24.763630726 +0000 UTC m=+22.013163364 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b14b1b22-9cc1-44da-bab6-32ec6c417f9a-config-volume") pod "coredns-5dd5756b68-bsdfd" (UID: "b14b1b22-9cc1-44da-bab6-32ec6c417f9a") : object "kube-system"/"coredns" not registered
	Sep 12 18:42:16 multinode-348977 kubelet[1267]: E0912 18:42:16.764171    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t podName:e821b198-38ff-4455-9acb-74f6774ee805 nodeName:}" failed. No retries permitted until 2023-09-12 18:42:24.764155206 +0000 UTC m=+22.013687834 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fth6t" (UniqueName: "kubernetes.io/projected/e821b198-38ff-4455-9acb-74f6774ee805-kube-api-access-fth6t") pod "busybox-5bc68d56bd-lzrq4" (UID: "e821b198-38ff-4455-9acb-74f6774ee805") : object "default"/"kube-root-ca.crt" not registered
	Sep 12 18:42:25 multinode-348977 kubelet[1267]: I0912 18:42:25.769672    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3688028545c3998c5f75e4d5e6621c4c5a0e73bbe71c9395d6387d5b29ed167d"
	Sep 12 18:42:25 multinode-348977 kubelet[1267]: I0912 18:42:25.936560    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df497b51c1b90fe37ef8ae9f7ebb72d0151d57610fa49d4fe1fd419f9ce2ef4"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-348977 -n multinode-348977
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-348977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (3.08s)
Test pass (284/317)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.14
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.1/json-events 5.72
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.53
20 TestOffline 72.74
22 TestAddons/Setup 151.59
24 TestAddons/parallel/Registry 14.89
25 TestAddons/parallel/Ingress 28.28
26 TestAddons/parallel/InspektorGadget 11.04
27 TestAddons/parallel/MetricsServer 5.94
28 TestAddons/parallel/HelmTiller 13.52
30 TestAddons/parallel/CSI 47.13
31 TestAddons/parallel/Headlamp 15.47
32 TestAddons/parallel/CloudSpanner 6.03
35 TestAddons/serial/GCPAuth/Namespaces 0.14
36 TestAddons/StoppedEnableDisable 13.34
37 TestCertOptions 82.36
38 TestCertExpiration 290.54
39 TestDockerFlags 95.67
40 TestForceSystemdFlag 104.38
41 TestForceSystemdEnv 110.18
43 TestKVMDriverInstallOrUpdate 3.11
47 TestErrorSpam/setup 52.35
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.76
50 TestErrorSpam/pause 1.17
51 TestErrorSpam/unpause 1.26
52 TestErrorSpam/stop 3.52
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 65.13
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 40.9
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.44
64 TestFunctional/serial/CacheCmd/cache/add_local 0.99
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.16
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 41.5
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.05
75 TestFunctional/serial/LogsFileCmd 1.13
76 TestFunctional/serial/InvalidService 4.5
78 TestFunctional/parallel/ConfigCmd 0.29
79 TestFunctional/parallel/DashboardCmd 42.9
80 TestFunctional/parallel/DryRun 0.26
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 1.15
86 TestFunctional/parallel/ServiceCmdConnect 12.58
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 59.3
90 TestFunctional/parallel/SSHCmd 0.4
91 TestFunctional/parallel/CpCmd 0.92
92 TestFunctional/parallel/MySQL 39.06
93 TestFunctional/parallel/FileSync 0.32
94 TestFunctional/parallel/CertSync 1.31
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
102 TestFunctional/parallel/License 0.16
103 TestFunctional/parallel/ServiceCmd/DeployApp 14.24
104 TestFunctional/parallel/Version/short 0.04
105 TestFunctional/parallel/Version/components 0.97
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
110 TestFunctional/parallel/ImageCommands/ImageBuild 2.6
111 TestFunctional/parallel/ImageCommands/Setup 0.87
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.54
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.43
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.85
115 TestFunctional/parallel/ServiceCmd/List 0.26
116 TestFunctional/parallel/DockerEnv/bash 1.2
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
119 TestFunctional/parallel/ServiceCmd/Format 0.35
120 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.67
121 TestFunctional/parallel/ServiceCmd/URL 0.4
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.86
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
136 TestFunctional/parallel/ProfileCmd/profile_list 0.3
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
138 TestFunctional/parallel/MountCmd/any-port 26.48
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.83
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.28
141 TestFunctional/parallel/MountCmd/specific-port 2.15
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
143 TestFunctional/delete_addon-resizer_images 0.07
144 TestFunctional/delete_my-image_image 0.01
145 TestFunctional/delete_minikube_cached_images 0.01
146 TestGvisorAddon 395.96
149 TestImageBuild/serial/Setup 51.06
150 TestImageBuild/serial/NormalBuild 1.1
151 TestImageBuild/serial/BuildWithBuildArg 1.22
152 TestImageBuild/serial/BuildWithDockerIgnore 0.36
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.28
156 TestIngressAddonLegacy/StartLegacyK8sCluster 109.11
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.92
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 33.18
163 TestJSONOutput/start/Command 64
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.55
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.52
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 8.09
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.18
191 TestMainNoArgs 0.04
192 TestMinikubeProfile 104.92
195 TestMountStart/serial/StartWithMountFirst 29.5
196 TestMountStart/serial/VerifyMountFirst 0.35
197 TestMountStart/serial/StartWithMountSecond 29.28
198 TestMountStart/serial/VerifyMountSecond 0.36
199 TestMountStart/serial/DeleteFirst 0.86
200 TestMountStart/serial/VerifyMountPostDelete 0.36
201 TestMountStart/serial/Stop 2.07
202 TestMountStart/serial/RestartStopped 24.39
203 TestMountStart/serial/VerifyMountPostStop 0.36
206 TestMultiNode/serial/FreshStart2Nodes 127.12
207 TestMultiNode/serial/DeployApp2Nodes 3.79
208 TestMultiNode/serial/PingHostFrom2Pods 0.81
209 TestMultiNode/serial/AddNode 46.82
210 TestMultiNode/serial/ProfileList 0.2
211 TestMultiNode/serial/CopyFile 7.14
212 TestMultiNode/serial/StopNode 3.91
213 TestMultiNode/serial/StartAfterStop 32.4
216 TestMultiNode/serial/StopMultiNode 112.26
217 TestMultiNode/serial/RestartMultiNode 111.01
218 TestMultiNode/serial/ValidateNameConflict 50.64
223 TestPreload 169.15
225 TestScheduledStopUnix 123.39
226 TestSkaffold 139.66
229 TestRunningBinaryUpgrade 190.84
231 TestKubernetesUpgrade 283.11
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
242 TestNoKubernetes/serial/StartWithK8s 63.47
244 TestPause/serial/Start 93.23
245 TestNoKubernetes/serial/StartWithStopK8s 25.77
246 TestNoKubernetes/serial/Start 30.07
247 TestPause/serial/SecondStartNoReconfiguration 57.72
248 TestStoppedBinaryUpgrade/Setup 0.47
249 TestStoppedBinaryUpgrade/Upgrade 216.23
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
251 TestNoKubernetes/serial/ProfileList 0.99
252 TestNoKubernetes/serial/Stop 2.09
253 TestNoKubernetes/serial/StartNoArgs 25.48
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
255 TestPause/serial/Pause 1.43
256 TestPause/serial/VerifyStatus 0.28
257 TestPause/serial/Unpause 0.96
258 TestPause/serial/PauseAgain 0.64
259 TestPause/serial/DeletePaused 1.13
260 TestPause/serial/VerifyDeletedResources 16.12
273 TestStartStop/group/old-k8s-version/serial/FirstStart 144.94
275 TestStartStop/group/no-preload/serial/FirstStart 127.99
276 TestStartStop/group/no-preload/serial/DeployApp 10.81
277 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.27
278 TestStartStop/group/no-preload/serial/Stop 13.1
279 TestStartStop/group/old-k8s-version/serial/DeployApp 9.58
280 TestStoppedBinaryUpgrade/MinikubeLogs 2.09
282 TestStartStop/group/embed-certs/serial/FirstStart 73.74
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
284 TestStartStop/group/no-preload/serial/SecondStart 334.41
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
286 TestStartStop/group/old-k8s-version/serial/Stop 13.2
288 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 122.78
289 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
290 TestStartStop/group/old-k8s-version/serial/SecondStart 500.93
291 TestStartStop/group/embed-certs/serial/DeployApp 8.45
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
293 TestStartStop/group/embed-certs/serial/Stop 13.14
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
295 TestStartStop/group/embed-certs/serial/SecondStart 344.96
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.11
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 334.05
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
304 TestStartStop/group/no-preload/serial/Pause 2.56
306 TestStartStop/group/newest-cni/serial/FirstStart 72.79
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
309 TestStartStop/group/newest-cni/serial/Stop 8.11
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/newest-cni/serial/SecondStart 50.84
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.02
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
315 TestStartStop/group/embed-certs/serial/Pause 2.86
316 TestNetworkPlugins/group/auto/Start 110.06
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/newest-cni/serial/Pause 2.56
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 21.03
322 TestNetworkPlugins/group/kindnet/Start 92.31
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
326 TestNetworkPlugins/group/calico/Start 110.4
327 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
328 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
329 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
330 TestStartStop/group/old-k8s-version/serial/Pause 4.52
331 TestNetworkPlugins/group/custom-flannel/Start 101.35
332 TestNetworkPlugins/group/auto/KubeletFlags 0.23
333 TestNetworkPlugins/group/auto/NetCatPod 14.47
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
335 TestNetworkPlugins/group/kindnet/KubeletFlags 0.53
336 TestNetworkPlugins/group/kindnet/NetCatPod 13.47
337 TestNetworkPlugins/group/auto/DNS 0.33
338 TestNetworkPlugins/group/auto/Localhost 0.26
339 TestNetworkPlugins/group/auto/HairPin 0.2
340 TestNetworkPlugins/group/kindnet/DNS 0.27
341 TestNetworkPlugins/group/kindnet/Localhost 0.23
342 TestNetworkPlugins/group/kindnet/HairPin 0.23
343 TestNetworkPlugins/group/false/Start 84.26
344 TestNetworkPlugins/group/flannel/Start 107.83
345 TestNetworkPlugins/group/calico/ControllerPod 5.03
346 TestNetworkPlugins/group/calico/KubeletFlags 0.19
347 TestNetworkPlugins/group/calico/NetCatPod 12.41
348 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
349 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.43
350 TestNetworkPlugins/group/calico/DNS 0.34
351 TestNetworkPlugins/group/calico/Localhost 0.16
352 TestNetworkPlugins/group/calico/HairPin 0.19
353 TestNetworkPlugins/group/custom-flannel/DNS 0.22
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
356 TestNetworkPlugins/group/bridge/Start 83.08
357 TestNetworkPlugins/group/kubenet/Start 108.23
358 TestNetworkPlugins/group/false/KubeletFlags 0.2
359 TestNetworkPlugins/group/false/NetCatPod 12.43
360 TestNetworkPlugins/group/false/DNS 0.3
361 TestNetworkPlugins/group/false/Localhost 0.22
362 TestNetworkPlugins/group/false/HairPin 0.23
363 TestNetworkPlugins/group/enable-default-cni/Start 114.74
364 TestNetworkPlugins/group/flannel/ControllerPod 5.03
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
366 TestNetworkPlugins/group/flannel/NetCatPod 16.92
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
368 TestNetworkPlugins/group/bridge/NetCatPod 12.4
369 TestNetworkPlugins/group/flannel/DNS 0.22
370 TestNetworkPlugins/group/flannel/Localhost 0.21
371 TestNetworkPlugins/group/flannel/HairPin 0.19
372 TestNetworkPlugins/group/bridge/DNS 0.21
373 TestNetworkPlugins/group/bridge/Localhost 0.18
374 TestNetworkPlugins/group/bridge/HairPin 0.19
375 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
376 TestNetworkPlugins/group/kubenet/NetCatPod 13.47
377 TestNetworkPlugins/group/kubenet/DNS 0.21
378 TestNetworkPlugins/group/kubenet/Localhost 0.17
379 TestNetworkPlugins/group/kubenet/HairPin 0.16
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
TestDownloadOnly/v1.16.0/json-events (8.14s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-684390 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-684390 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (8.143522215s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.14s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-684390
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-684390: exit status 85 (53.969485ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-684390 | jenkins | v1.31.2 | 12 Sep 23 18:20 UTC |          |
	|         | -p download-only-684390        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:20:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:20:19.310179   10860 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:20:19.310291   10860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:20:19.310300   10860 out.go:309] Setting ErrFile to fd 2...
	I0912 18:20:19.310305   10860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:20:19.310501   10860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	W0912 18:20:19.310637   10860 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17233-3674/.minikube/config/config.json: open /home/jenkins/minikube-integration/17233-3674/.minikube/config/config.json: no such file or directory
	I0912 18:20:19.311236   10860 out.go:303] Setting JSON to true
	I0912 18:20:19.312100   10860 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":169,"bootTime":1694542650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:20:19.312163   10860 start.go:138] virtualization: kvm guest
	I0912 18:20:19.314788   10860 out.go:97] [download-only-684390] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:20:19.316332   10860 out.go:169] MINIKUBE_LOCATION=17233
	W0912 18:20:19.314890   10860 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 18:20:19.314944   10860 notify.go:220] Checking for updates...
	I0912 18:20:19.319171   10860 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:20:19.320691   10860 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:20:19.322010   10860 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:20:19.323313   10860 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 18:20:19.325874   10860 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 18:20:19.326088   10860 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:20:19.432563   10860 out.go:97] Using the kvm2 driver based on user configuration
	I0912 18:20:19.432596   10860 start.go:298] selected driver: kvm2
	I0912 18:20:19.432609   10860 start.go:902] validating driver "kvm2" against <nil>
	I0912 18:20:19.432940   10860 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:20:19.433087   10860 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3674/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:20:19.447480   10860 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:20:19.447528   10860 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 18:20:19.447972   10860 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0912 18:20:19.448144   10860 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 18:20:19.448200   10860 cni.go:84] Creating CNI manager for ""
	I0912 18:20:19.448219   10860 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 18:20:19.448230   10860 start_flags.go:321] config:
	{Name:download-only-684390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:20:19.448417   10860 iso.go:125] acquiring lock: {Name:mk43b7bcf1553c61ec6315fe7159639653246bdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:20:19.450516   10860 out.go:97] Downloading VM boot image ...
	I0912 18:20:19.450544   10860 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17233-3674/.minikube/cache/iso/amd64/minikube-v1.31.0-1694081706-17207-amd64.iso
	I0912 18:20:21.885294   10860 out.go:97] Starting control plane node download-only-684390 in cluster download-only-684390
	I0912 18:20:21.885326   10860 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 18:20:21.925204   10860 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0912 18:20:21.925233   10860 cache.go:57] Caching tarball of preloaded images
	I0912 18:20:21.925384   10860 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 18:20:21.927462   10860 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0912 18:20:21.927478   10860 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0912 18:20:21.956441   10860 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684390"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

TestDownloadOnly/v1.28.1/json-events (5.72s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-684390 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-684390 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=kvm2 : (5.71891066s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (5.72s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-684390
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-684390: exit status 85 (52.247677ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-684390 | jenkins | v1.31.2 | 12 Sep 23 18:20 UTC |          |
	|         | -p download-only-684390        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-684390 | jenkins | v1.31.2 | 12 Sep 23 18:20 UTC |          |
	|         | -p download-only-684390        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:20:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:20:27.507308   10918 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:20:27.507533   10918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:20:27.507540   10918 out.go:309] Setting ErrFile to fd 2...
	I0912 18:20:27.507545   10918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:20:27.507744   10918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	W0912 18:20:27.507847   10918 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17233-3674/.minikube/config/config.json: open /home/jenkins/minikube-integration/17233-3674/.minikube/config/config.json: no such file or directory
	I0912 18:20:27.508238   10918 out.go:303] Setting JSON to true
	I0912 18:20:27.508978   10918 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":178,"bootTime":1694542650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:20:27.509031   10918 start.go:138] virtualization: kvm guest
	I0912 18:20:27.510958   10918 out.go:97] [download-only-684390] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:20:27.512608   10918 out.go:169] MINIKUBE_LOCATION=17233
	I0912 18:20:27.511112   10918 notify.go:220] Checking for updates...
	I0912 18:20:27.515219   10918 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:20:27.516613   10918 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:20:27.518051   10918 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:20:27.519425   10918 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 18:20:27.522203   10918 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 18:20:27.522636   10918 config.go:182] Loaded profile config "download-only-684390": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0912 18:20:27.522684   10918 start.go:810] api.Load failed for download-only-684390: filestore "download-only-684390": Docker machine "download-only-684390" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0912 18:20:27.522766   10918 driver.go:373] Setting default libvirt URI to qemu:///system
	W0912 18:20:27.522791   10918 start.go:810] api.Load failed for download-only-684390: filestore "download-only-684390": Docker machine "download-only-684390" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0912 18:20:27.553712   10918 out.go:97] Using the kvm2 driver based on existing profile
	I0912 18:20:27.553744   10918 start.go:298] selected driver: kvm2
	I0912 18:20:27.553750   10918 start.go:902] validating driver "kvm2" against &{Name:download-only-684390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:20:27.554169   10918 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:20:27.554245   10918 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3674/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:20:27.568719   10918 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:20:27.569392   10918 cni.go:84] Creating CNI manager for ""
	I0912 18:20:27.569411   10918 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 18:20:27.569438   10918 start_flags.go:321] config:
	{Name:download-only-684390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-684390 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:20:27.569587   10918 iso.go:125] acquiring lock: {Name:mk43b7bcf1553c61ec6315fe7159639653246bdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:20:27.571535   10918 out.go:97] Starting control plane node download-only-684390 in cluster download-only-684390
	I0912 18:20:27.571553   10918 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:20:27.597611   10918 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0912 18:20:27.597643   10918 cache.go:57] Caching tarball of preloaded images
	I0912 18:20:27.597780   10918 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:20:27.599606   10918 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0912 18:20:27.599623   10918 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 18:20:27.633408   10918 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4?checksum=md5:e86539672b8ce9a3040455131c2fbb87 -> /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0912 18:20:31.532478   10918 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 18:20:31.532563   10918 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17233-3674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 18:20:32.392776   10918 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 18:20:32.392907   10918 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/download-only-684390/config.json ...
	I0912 18:20:32.393095   10918 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 18:20:32.393251   10918 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17233-3674/.minikube/cache/linux/amd64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684390"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-684390
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-003148 --alsologtostderr --binary-mirror http://127.0.0.1:36959 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-003148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-003148
--- PASS: TestBinaryMirror (0.53s)

TestOffline (72.74s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-829539 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-829539 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m11.741219738s)
helpers_test.go:175: Cleaning up "offline-docker-829539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-829539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-829539: (1.002557963s)
--- PASS: TestOffline (72.74s)

TestAddons/Setup (151.59s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-494250 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-494250 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.586070359s)
--- PASS: TestAddons/Setup (151.59s)

TestAddons/parallel/Registry (14.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 28.694143ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-54lnt" [103ee602-6534-4e26-a1fa-91ae2fda3b61] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019543128s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lj72k" [2034fe60-275d-4cc4-a208-d224a955e363] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014208898s
addons_test.go:316: (dbg) Run:  kubectl --context addons-494250 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-494250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-494250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.867847419s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 ip
2023/09/12 18:23:19 [DEBUG] GET http://192.168.39.15:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.89s)

TestAddons/parallel/Ingress (28.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-494250 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-494250 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-494250 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0752e6a5-e920-4779-81be-65666870fe7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0752e6a5-e920-4779-81be-65666870fe7d] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.022211576s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-494250 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.15
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-494250 addons disable ingress-dns --alsologtostderr -v=1: (1.754325229s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-494250 addons disable ingress --alsologtostderr -v=1: (7.794980592s)
--- PASS: TestAddons/parallel/Ingress (28.28s)

TestAddons/parallel/InspektorGadget (11.04s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dzvp9" [206abdef-25d0-4a52-9d3f-aa650703b98f] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011579186s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-494250
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-494250: (6.03082326s)
--- PASS: TestAddons/parallel/InspektorGadget (11.04s)

TestAddons/parallel/MetricsServer (5.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.737358ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-fj242" [7cdbd13e-6254-4bab-be9c-610edc87dcc9] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016427257s
addons_test.go:391: (dbg) Run:  kubectl --context addons-494250 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.94s)

TestAddons/parallel/HelmTiller (13.52s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 28.411791ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-699d5" [82243dc1-2aa2-4c3b-ac16-582d6ede8d18] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.025466616s
addons_test.go:449: (dbg) Run:  kubectl --context addons-494250 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-494250 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.875559103s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.52s)

TestAddons/parallel/CSI (47.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.930195ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-494250 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-494250 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9924ea67-4b37-4052-ae25-e0971a5151cf] Pending
helpers_test.go:344: "task-pv-pod" [9924ea67-4b37-4052-ae25-e0971a5151cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9924ea67-4b37-4052-ae25-e0971a5151cf] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.038613246s
addons_test.go:560: (dbg) Run:  kubectl --context addons-494250 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-494250 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-494250 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-494250 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-494250 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-494250 delete pod task-pv-pod: (1.316721506s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-494250 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-494250 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-494250 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-494250 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0598ab7d-3137-432a-973f-1c15a8e4ca9d] Pending
helpers_test.go:344: "task-pv-pod-restore" [0598ab7d-3137-432a-973f-1c15a8e4ca9d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0598ab7d-3137-432a-973f-1c15a8e4ca9d] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.023306497s
addons_test.go:602: (dbg) Run:  kubectl --context addons-494250 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-494250 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-494250 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-494250 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.69893449s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-494250 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.13s)
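The runs of `kubectl get pvc ... -o jsonpath={.status.phase}` above are the harness polling until the claim reports `Bound` (and likewise `readyToUse` for the volume snapshot). The pattern reduces to a generic poll-with-timeout loop; a minimal sketch with a stubbed status function standing in for the real `kubectl` call:

```python
import time

def wait_for(check, want, timeout=360.0, interval=2.0):
    """Poll check() until it returns want; give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check() == want:
            return True
        time.sleep(interval)
    return False

# Stub standing in for `kubectl get pvc hpvc -o jsonpath={.status.phase}`:
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for(lambda: next(phases), "Bound", timeout=5, interval=0.01))  # True
```

The test helpers use the same shape with a 6m0s timeout, re-running `kubectl` on each iteration.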

TestAddons/parallel/Headlamp (15.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-494250 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-494250 --alsologtostderr -v=1: (1.433593583s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-grx9c" [48c258b1-f2fe-4847-b389-7dbfa2e80fdc] Pending
helpers_test.go:344: "headlamp-699c48fb74-grx9c" [48c258b1-f2fe-4847-b389-7dbfa2e80fdc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-grx9c" [48c258b1-f2fe-4847-b389-7dbfa2e80fdc] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.035627405s
--- PASS: TestAddons/parallel/Headlamp (15.47s)

TestAddons/parallel/CloudSpanner (6.03s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-gbqjq" [8b6a3081-6cc8-4905-9827-6b28acfe919d] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.020495004s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-494250
--- PASS: TestAddons/parallel/CloudSpanner (6.03s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-494250 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-494250 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-494250
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-494250: (13.089227744s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-494250
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-494250
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-494250
--- PASS: TestAddons/StoppedEnableDisable (13.34s)

TestCertOptions (82.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-815108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-815108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m20.823886484s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-815108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-815108 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-815108 -- "sudo cat /etc/kubernetes/admin.conf"
E0912 18:59:37.570728   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-815108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-815108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-815108: (1.058680351s)
--- PASS: TestCertOptions (82.36s)

TestCertExpiration (290.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-161295 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0912 18:58:05.715300   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:58:07.910216   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-161295 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m21.107004559s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-161295 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-161295 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (28.342329443s)
helpers_test.go:175: Cleaning up "cert-expiration-161295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-161295
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-161295: (1.093332095s)
--- PASS: TestCertExpiration (290.54s)
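The `--cert-expiration=3m` and `--cert-expiration=8760h` flags above take Go duration syntax (presumably parsed with Go's `time.ParseDuration` inside minikube). A rough illustrative converter for the h/m/s units seen here — a hypothetical helper for reading the log, not minikube code:

```python
import re

def parse_go_duration(s):
    """Convert a Go-style duration string (h/m/s units only) to seconds.
    Illustrative sketch; Go's time.ParseDuration also handles ns/us/ms."""
    units = {"h": 3600, "m": 60, "s": 1}
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)([hms])", s):
        total += float(value) * units[unit]
    return total

print(parse_go_duration("3m") / 60)        # → 3.0 (minutes)
print(parse_go_duration("8760h") / 86400)  # → 365.0 (days, i.e. one non-leap year)
```

So the test first issues certificates that expire in three minutes, waits for them to lapse, then restarts with a one-year expiration.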

TestDockerFlags (95.67s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-964244 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-964244 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m33.88286355s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-964244 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-964244 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-964244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-964244
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-964244: (1.14500879s)
--- PASS: TestDockerFlags (95.67s)

TestForceSystemdFlag (104.38s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-026747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-026747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m42.90262734s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-026747 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-026747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-026747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-026747: (1.169071238s)
--- PASS: TestForceSystemdFlag (104.38s)

TestForceSystemdEnv (110.18s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-333579 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-333579 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m48.958584242s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-333579 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-333579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-333579
--- PASS: TestForceSystemdEnv (110.18s)

TestKVMDriverInstallOrUpdate (3.11s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.11s)

TestErrorSpam/setup (52.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-315509 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-315509 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-315509 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-315509 --driver=kvm2 : (52.347711644s)
--- PASS: TestErrorSpam/setup (52.35s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.17s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 pause
--- PASS: TestErrorSpam/pause (1.17s)

TestErrorSpam/unpause (1.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 unpause
--- PASS: TestErrorSpam/unpause (1.26s)

TestErrorSpam/stop (3.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 stop: (3.400388646s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-315509 --log_dir /tmp/nospam-315509 stop
--- PASS: TestErrorSpam/stop (3.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17233-3674/.minikube/files/etc/test/nested/copy/10848/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003989 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-003989 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.131486594s)
--- PASS: TestFunctional/serial/StartWithProxy (65.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.9s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003989 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-003989 --alsologtostderr -v=8: (40.896421304s)
functional_test.go:659: soft start took 40.897099981s for "functional-003989" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.90s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-003989 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-003989 /tmp/TestFunctionalserialCacheCmdcacheadd_local1298869426/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cache add minikube-local-cache-test:functional-003989
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cache delete minikube-local-cache-test:functional-003989
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-003989
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.891719ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 kubectl -- --context functional-003989 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-003989 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003989 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-003989 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.504582813s)
functional_test.go:757: restart took 41.504726232s for "functional-003989" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-003989 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 logs: (1.04909248s)
--- PASS: TestFunctional/serial/LogsCmd (1.05s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 logs --file /tmp/TestFunctionalserialLogsFileCmd2250709156/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 logs --file /tmp/TestFunctionalserialLogsFileCmd2250709156/001/logs.txt: (1.127549488s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

                                                
                                    
TestFunctional/serial/InvalidService (4.5s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-003989 apply -f testdata/invalidsvc.yaml
E0912 18:28:05.716620   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:05.722444   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:05.732755   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:05.753080   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:05.793365   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:05.873715   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:06.034169   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:28:06.354795   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-003989
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-003989: exit status 115 (277.949591ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.43:32021 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-003989 delete -f testdata/invalidsvc.yaml
E0912 18:28:06.995731   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.50s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 config get cpus: exit status 14 (46.140178ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 config get cpus: exit status 14 (41.027162ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (42.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-003989 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-003989 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17581: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (42.90s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-003989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (131.489563ms)

                                                
                                                
-- stdout --
	* [functional-003989] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 18:28:29.983524   17424 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:28:29.983780   17424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:28:29.983796   17424 out.go:309] Setting ErrFile to fd 2...
	I0912 18:28:29.983803   17424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:28:29.984336   17424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:28:29.985218   17424 out.go:303] Setting JSON to false
	I0912 18:28:29.986432   17424 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":660,"bootTime":1694542650,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:28:29.986491   17424 start.go:138] virtualization: kvm guest
	I0912 18:28:29.988811   17424 out.go:177] * [functional-003989] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:28:29.990773   17424 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:28:29.992186   17424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:28:29.990831   17424 notify.go:220] Checking for updates...
	I0912 18:28:29.993728   17424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:28:29.995215   17424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:28:29.996647   17424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:28:29.998016   17424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:28:29.999941   17424 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:28:30.000613   17424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:28:30.000692   17424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:28:30.015394   17424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0912 18:28:30.015750   17424 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:28:30.016395   17424 main.go:141] libmachine: Using API Version  1
	I0912 18:28:30.016427   17424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:28:30.016772   17424 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:28:30.016927   17424 main.go:141] libmachine: (functional-003989) Calling .DriverName
	I0912 18:28:30.017137   17424 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:28:30.017410   17424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:28:30.017442   17424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:28:30.031481   17424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I0912 18:28:30.031840   17424 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:28:30.032271   17424 main.go:141] libmachine: Using API Version  1
	I0912 18:28:30.032295   17424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:28:30.032577   17424 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:28:30.032773   17424 main.go:141] libmachine: (functional-003989) Calling .DriverName
	I0912 18:28:30.068576   17424 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 18:28:30.070233   17424 start.go:298] selected driver: kvm2
	I0912 18:28:30.070248   17424 start.go:902] validating driver "kvm2" against &{Name:functional-003989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-003989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.43 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:28:30.070373   17424 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:28:30.072860   17424 out.go:177] 
	W0912 18:28:30.074188   17424 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 18:28:30.075524   17424 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003989 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-003989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (129.578493ms)

                                                
                                                
-- stdout --
	* [functional-003989] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 18:28:30.244359   17480 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:28:30.244587   17480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:28:30.244595   17480 out.go:309] Setting ErrFile to fd 2...
	I0912 18:28:30.244600   17480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:28:30.244822   17480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:28:30.245314   17480 out.go:303] Setting JSON to false
	I0912 18:28:30.246206   17480 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":660,"bootTime":1694542650,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:28:30.246262   17480 start.go:138] virtualization: kvm guest
	I0912 18:28:30.248404   17480 out.go:177] * [functional-003989] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0912 18:28:30.249955   17480 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:28:30.249962   17480 notify.go:220] Checking for updates...
	I0912 18:28:30.251925   17480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:28:30.253535   17480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	I0912 18:28:30.255082   17480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	I0912 18:28:30.256501   17480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:28:30.257929   17480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:28:30.261142   17480 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:28:30.261726   17480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:28:30.261785   17480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:28:30.276111   17480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I0912 18:28:30.276542   17480 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:28:30.277100   17480 main.go:141] libmachine: Using API Version  1
	I0912 18:28:30.277122   17480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:28:30.277421   17480 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:28:30.277587   17480 main.go:141] libmachine: (functional-003989) Calling .DriverName
	I0912 18:28:30.277789   17480 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:28:30.278073   17480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:28:30.278106   17480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:28:30.292374   17480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0912 18:28:30.292714   17480 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:28:30.293142   17480 main.go:141] libmachine: Using API Version  1
	I0912 18:28:30.293165   17480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:28:30.293468   17480 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:28:30.293651   17480 main.go:141] libmachine: (functional-003989) Calling .DriverName
	I0912 18:28:30.327794   17480 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0912 18:28:30.329200   17480 start.go:298] selected driver: kvm2
	I0912 18:28:30.329218   17480 start.go:902] validating driver "kvm2" against &{Name:functional-003989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-003989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.43 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:28:30.329349   17480 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:28:30.331685   17480 out.go:177] 
	W0912 18:28:30.333052   17480 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 18:28:30.334302   17480 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-003989 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-003989 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cjcwg" [3b3f1d17-fbf3-407e-b008-101af2b991e9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cjcwg" [3b3f1d17-fbf3-407e-b008-101af2b991e9] Running
E0912 18:28:15.957732   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.017072038s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.43:30102
functional_test.go:1674: http://192.168.50.43:30102: success! body:

Hostname: hello-node-connect-55497b8b78-cjcwg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.43:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.43:30102
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.58s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 addons list -o json
E0912 18:28:08.276266   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (59.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [35e5551f-7420-4d66-8223-5e4329475030] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.035027491s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-003989 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-003989 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-003989 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-003989 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cd0914c6-5806-49e9-85ae-149df9b21423] Pending
helpers_test.go:344: "sp-pod" [cd0914c6-5806-49e9-85ae-149df9b21423] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cd0914c6-5806-49e9-85ae-149df9b21423] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.020011517s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-003989 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-003989 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-003989 delete -f testdata/storage-provisioner/pod.yaml: (2.063938926s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-003989 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [53b67267-6004-4ec4-8e68-63240cabc310] Pending
helpers_test.go:344: "sp-pod" [53b67267-6004-4ec4-8e68-63240cabc310] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0912 18:28:46.679801   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [53b67267-6004-4ec4-8e68-63240cabc310] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 34.016899836s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-003989 exec sp-pod -- ls /tmp/mount
2023/09/12 18:29:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (59.30s)

TestFunctional/parallel/SSHCmd (0.4s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

TestFunctional/parallel/CpCmd (0.92s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh -n functional-003989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 cp functional-003989:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2679796494/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh -n functional-003989 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.92s)

TestFunctional/parallel/MySQL (39.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-003989 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pwkfm" [cc03600f-2c0c-422c-b805-bb842bbe987c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pwkfm" [cc03600f-2c0c-422c-b805-bb842bbe987c] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.019677173s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;": exit status 1 (334.112603ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;": exit status 1 (553.149216ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;": exit status 1 (321.09044ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;": exit status 1 (249.315776ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-003989 exec mysql-859648c796-pwkfm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.06s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10848/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /etc/test/nested/copy/10848/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10848.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /etc/ssl/certs/10848.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10848.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /usr/share/ca-certificates/10848.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/108482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /etc/ssl/certs/108482.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/108482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /usr/share/ca-certificates/108482.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-003989 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh "sudo systemctl is-active crio": exit status 1 (215.518114ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-003989 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-003989 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-d8h7z" [038431e3-9f4e-47c1-8be6-84536be079b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-d8h7z" [038431e3-9f4e-47c1-8be6-84536be079b6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.021644863s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.24s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.97s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003989 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-003989
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-003989
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003989 image ls --format short --alsologtostderr:
I0912 18:28:57.368841   18209 out.go:296] Setting OutFile to fd 1 ...
I0912 18:28:57.369145   18209 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:28:57.369160   18209 out.go:309] Setting ErrFile to fd 2...
I0912 18:28:57.369167   18209 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:28:57.369420   18209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
I0912 18:28:57.370189   18209 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:28:57.370341   18209 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:28:57.370895   18209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:28:57.370967   18209 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:28:57.385879   18209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
I0912 18:28:57.386424   18209 main.go:141] libmachine: () Calling .GetVersion
I0912 18:28:57.387095   18209 main.go:141] libmachine: Using API Version  1
I0912 18:28:57.387122   18209 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:28:57.387510   18209 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:28:57.387690   18209 main.go:141] libmachine: (functional-003989) Calling .GetState
I0912 18:28:57.389491   18209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:28:57.389529   18209 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:28:57.404319   18209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37407
I0912 18:28:57.404777   18209 main.go:141] libmachine: () Calling .GetVersion
I0912 18:28:57.405303   18209 main.go:141] libmachine: Using API Version  1
I0912 18:28:57.405353   18209 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:28:57.405703   18209 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:28:57.405876   18209 main.go:141] libmachine: (functional-003989) Calling .DriverName
I0912 18:28:57.406045   18209 ssh_runner.go:195] Run: systemctl --version
I0912 18:28:57.406074   18209 main.go:141] libmachine: (functional-003989) Calling .GetSSHHostname
I0912 18:28:57.409206   18209 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:28:57.409542   18209 main.go:141] libmachine: (functional-003989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:f1", ip: ""} in network mk-functional-003989: {Iface:virbr1 ExpiryTime:2023-09-12 19:25:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:f1 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:functional-003989 Clientid:01:52:54:00:f1:f3:f1}
I0912 18:28:57.409568   18209 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined IP address 192.168.50.43 and MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:28:57.409811   18209 main.go:141] libmachine: (functional-003989) Calling .GetSSHPort
I0912 18:28:57.409974   18209 main.go:141] libmachine: (functional-003989) Calling .GetSSHKeyPath
I0912 18:28:57.410093   18209 main.go:141] libmachine: (functional-003989) Calling .GetSSHUsername
I0912 18:28:57.410198   18209 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/functional-003989/id_rsa Username:docker}
I0912 18:28:57.562390   18209 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0912 18:28:57.632229   18209 main.go:141] libmachine: Making call to close driver server
I0912 18:28:57.632241   18209 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:28:57.632538   18209 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:28:57.632548   18209 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
I0912 18:28:57.632561   18209 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:28:57.632575   18209 main.go:141] libmachine: Making call to close driver server
I0912 18:28:57.632584   18209 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:28:57.632903   18209 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
I0912 18:28:57.632992   18209 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:28:57.633031   18209 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003989 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 821b3dfea27be | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b462ce0c8b1ff | 60.1MB |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-003989 | 4cd9f54a0d5cf | 30B    |
| docker.io/library/nginx                     | latest            | f5a6b296b8a29 | 187MB  |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 6cdbabde3874e | 73.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-003989 | 8ad7c444093c4 | 1.24MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.1           | 5c801295c21d0 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/google-containers/addon-resizer      | functional-003989 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003989 image ls --format table --alsologtostderr:
I0912 18:29:00.695970   18389 out.go:296] Setting OutFile to fd 1 ...
I0912 18:29:00.696066   18389 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:29:00.696073   18389 out.go:309] Setting ErrFile to fd 2...
I0912 18:29:00.696078   18389 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:29:00.696266   18389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
I0912 18:29:00.696768   18389 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:29:00.696858   18389 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:29:00.697200   18389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:29:00.697241   18389 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:29:00.711367   18389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
I0912 18:29:00.711751   18389 main.go:141] libmachine: () Calling .GetVersion
I0912 18:29:00.712310   18389 main.go:141] libmachine: Using API Version  1
I0912 18:29:00.712333   18389 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:29:00.712656   18389 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:29:00.712880   18389 main.go:141] libmachine: (functional-003989) Calling .GetState
I0912 18:29:00.714740   18389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:29:00.714775   18389 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:29:00.728391   18389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
I0912 18:29:00.728723   18389 main.go:141] libmachine: () Calling .GetVersion
I0912 18:29:00.729219   18389 main.go:141] libmachine: Using API Version  1
I0912 18:29:00.729241   18389 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:29:00.729516   18389 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:29:00.729692   18389 main.go:141] libmachine: (functional-003989) Calling .DriverName
I0912 18:29:00.729877   18389 ssh_runner.go:195] Run: systemctl --version
I0912 18:29:00.729910   18389 main.go:141] libmachine: (functional-003989) Calling .GetSSHHostname
I0912 18:29:00.732429   18389 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:29:00.732845   18389 main.go:141] libmachine: (functional-003989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:f1", ip: ""} in network mk-functional-003989: {Iface:virbr1 ExpiryTime:2023-09-12 19:25:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:f1 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:functional-003989 Clientid:01:52:54:00:f1:f3:f1}
I0912 18:29:00.732888   18389 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined IP address 192.168.50.43 and MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:29:00.733019   18389 main.go:141] libmachine: (functional-003989) Calling .GetSSHPort
I0912 18:29:00.733172   18389 main.go:141] libmachine: (functional-003989) Calling .GetSSHKeyPath
I0912 18:29:00.733307   18389 main.go:141] libmachine: (functional-003989) Calling .GetSSHUsername
I0912 18:29:00.733444   18389 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/functional-003989/id_rsa Username:docker}
I0912 18:29:00.824858   18389 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0912 18:29:00.850214   18389 main.go:141] libmachine: Making call to close driver server
I0912 18:29:00.850242   18389 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:29:00.850518   18389 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:29:00.850532   18389 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:29:00.850544   18389 main.go:141] libmachine: Making call to close driver server
I0912 18:29:00.850552   18389 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:29:00.850779   18389 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
I0912 18:29:00.850794   18389 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:29:00.850808   18389 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003989 image ls --format json --alsologtostderr:
[{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-003989"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"60100000"},{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"73100000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"4cd9f54a0d5cf777655187040e18da60e0deb7459a76d9d30626f01cd9416134","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-003989"],"size":"30"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"8ad7c444093c417fa9359d78a51bbdfdaf8ef247da727e501de2ff93e76b9070","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-003989"],"size":"1240000"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126000000"}]
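For readability, the single-line JSON array emitted by `image ls --format json` can be tabulated with a short script. This helper is illustrative only (it is not part of the minikube test suite) and assumes nothing beyond the field names visible in the output above: `id`, `repoTags`, and `size`.

```python
import json

# One element in the shape emitted by `minikube image ls --format json`
# (field names copied from the output above; truncated id is illustrative).
sample = (
    '[{"id":"f5a6b296b8a2","repoDigests":[],'
    '"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"}]'
)

def tabulate(raw: str) -> list[tuple[str, str, str]]:
    """Return (repoTag, short id, size) rows, sorted by repoTag."""
    rows = []
    for img in json.loads(raw):
        for tag in img["repoTags"]:
            rows.append((tag, img["id"][:13], img["size"]))
    return sorted(rows)

print(tabulate(sample))
# -> [('docker.io/library/nginx:latest', 'f5a6b296b8a2', '187000000')]
```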
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003989 image ls --format json --alsologtostderr:
I0912 18:29:00.484469   18365 out.go:296] Setting OutFile to fd 1 ...
I0912 18:29:00.484714   18365 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:29:00.484723   18365 out.go:309] Setting ErrFile to fd 2...
I0912 18:29:00.484728   18365 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:29:00.484963   18365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
I0912 18:29:00.485556   18365 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:29:00.485658   18365 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:29:00.486052   18365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:29:00.486100   18365 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:29:00.499946   18365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
I0912 18:29:00.500388   18365 main.go:141] libmachine: () Calling .GetVersion
I0912 18:29:00.500889   18365 main.go:141] libmachine: Using API Version  1
I0912 18:29:00.500913   18365 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:29:00.501242   18365 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:29:00.501392   18365 main.go:141] libmachine: (functional-003989) Calling .GetState
I0912 18:29:00.503335   18365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:29:00.503372   18365 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:29:00.517382   18365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
I0912 18:29:00.517731   18365 main.go:141] libmachine: () Calling .GetVersion
I0912 18:29:00.518169   18365 main.go:141] libmachine: Using API Version  1
I0912 18:29:00.518193   18365 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:29:00.518476   18365 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:29:00.518669   18365 main.go:141] libmachine: (functional-003989) Calling .DriverName
I0912 18:29:00.518843   18365 ssh_runner.go:195] Run: systemctl --version
I0912 18:29:00.518864   18365 main.go:141] libmachine: (functional-003989) Calling .GetSSHHostname
I0912 18:29:00.521494   18365 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:29:00.521889   18365 main.go:141] libmachine: (functional-003989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:f1", ip: ""} in network mk-functional-003989: {Iface:virbr1 ExpiryTime:2023-09-12 19:25:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:f1 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:functional-003989 Clientid:01:52:54:00:f1:f3:f1}
I0912 18:29:00.521920   18365 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined IP address 192.168.50.43 and MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:29:00.522033   18365 main.go:141] libmachine: (functional-003989) Calling .GetSSHPort
I0912 18:29:00.522211   18365 main.go:141] libmachine: (functional-003989) Calling .GetSSHKeyPath
I0912 18:29:00.522359   18365 main.go:141] libmachine: (functional-003989) Calling .GetSSHUsername
I0912 18:29:00.522497   18365 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/functional-003989/id_rsa Username:docker}
I0912 18:29:00.616998   18365 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0912 18:29:00.653767   18365 main.go:141] libmachine: Making call to close driver server
I0912 18:29:00.653779   18365 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:29:00.654031   18365 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:29:00.654051   18365 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:29:00.654051   18365 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
I0912 18:29:00.654060   18365 main.go:141] libmachine: Making call to close driver server
I0912 18:29:00.654069   18365 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:29:00.654303   18365 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:29:00.654320   18365 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003989 image ls --format yaml --alsologtostderr:
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 4cd9f54a0d5cf777655187040e18da60e0deb7459a76d9d30626f01cd9416134
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-003989
size: "30"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-003989
size: "32900000"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "73100000"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "60100000"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003989 image ls --format yaml --alsologtostderr:
I0912 18:28:57.676775   18254 out.go:296] Setting OutFile to fd 1 ...
I0912 18:28:57.677035   18254 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:28:57.677046   18254 out.go:309] Setting ErrFile to fd 2...
I0912 18:28:57.677054   18254 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:28:57.677247   18254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
I0912 18:28:57.677847   18254 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:28:57.678062   18254 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:28:57.678463   18254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:28:57.678521   18254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:28:57.692694   18254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
I0912 18:28:57.693150   18254 main.go:141] libmachine: () Calling .GetVersion
I0912 18:28:57.693730   18254 main.go:141] libmachine: Using API Version  1
I0912 18:28:57.693762   18254 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:28:57.694096   18254 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:28:57.694299   18254 main.go:141] libmachine: (functional-003989) Calling .GetState
I0912 18:28:57.696015   18254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:28:57.696057   18254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:28:57.710084   18254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
I0912 18:28:57.710508   18254 main.go:141] libmachine: () Calling .GetVersion
I0912 18:28:57.710992   18254 main.go:141] libmachine: Using API Version  1
I0912 18:28:57.711021   18254 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:28:57.711344   18254 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:28:57.711517   18254 main.go:141] libmachine: (functional-003989) Calling .DriverName
I0912 18:28:57.711701   18254 ssh_runner.go:195] Run: systemctl --version
I0912 18:28:57.711724   18254 main.go:141] libmachine: (functional-003989) Calling .GetSSHHostname
I0912 18:28:57.714825   18254 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:28:57.715292   18254 main.go:141] libmachine: (functional-003989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:f1", ip: ""} in network mk-functional-003989: {Iface:virbr1 ExpiryTime:2023-09-12 19:25:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:f1 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:functional-003989 Clientid:01:52:54:00:f1:f3:f1}
I0912 18:28:57.715321   18254 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined IP address 192.168.50.43 and MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:28:57.715525   18254 main.go:141] libmachine: (functional-003989) Calling .GetSSHPort
I0912 18:28:57.715701   18254 main.go:141] libmachine: (functional-003989) Calling .GetSSHKeyPath
I0912 18:28:57.715864   18254 main.go:141] libmachine: (functional-003989) Calling .GetSSHUsername
I0912 18:28:57.716018   18254 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/functional-003989/id_rsa Username:docker}
I0912 18:28:57.805473   18254 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0912 18:28:57.844755   18254 main.go:141] libmachine: Making call to close driver server
I0912 18:28:57.844768   18254 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:28:57.845034   18254 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:28:57.845054   18254 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:28:57.845066   18254 main.go:141] libmachine: Making call to close driver server
I0912 18:28:57.845076   18254 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:28:57.845076   18254 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
I0912 18:28:57.845292   18254 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:28:57.845308   18254 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:28:57.845336   18254 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh pgrep buildkitd: exit status 1 (183.027994ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image build -t localhost/my-image:functional-003989 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image build -t localhost/my-image:functional-003989 testdata/build --alsologtostderr: (2.218295392s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003989 image build -t localhost/my-image:functional-003989 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in f618f6762270
Removing intermediate container f618f6762270
---> 151a3d357886
Step 3/3 : ADD content.txt /
---> 8ad7c444093c
Successfully built 8ad7c444093c
Successfully tagged localhost/my-image:functional-003989
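The three `Step n/3` lines in the build output above map one-to-one onto the Dockerfile under `testdata/build`. Reconstructed from that output alone (the checked-in file may differ in whitespace or comments), it would read:

```dockerfile
# Reconstructed from the Step 1/3 .. Step 3/3 lines above;
# the actual testdata/build/Dockerfile may differ in detail.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```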
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003989 image build -t localhost/my-image:functional-003989 testdata/build --alsologtostderr:
I0912 18:28:58.073143   18307 out.go:296] Setting OutFile to fd 1 ...
I0912 18:28:58.073452   18307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:28:58.073498   18307 out.go:309] Setting ErrFile to fd 2...
I0912 18:28:58.073517   18307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:28:58.074037   18307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
I0912 18:28:58.075067   18307 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:28:58.075536   18307 config.go:182] Loaded profile config "functional-003989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 18:28:58.075908   18307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:28:58.075940   18307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:28:58.090066   18307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
I0912 18:28:58.090485   18307 main.go:141] libmachine: () Calling .GetVersion
I0912 18:28:58.091047   18307 main.go:141] libmachine: Using API Version  1
I0912 18:28:58.091077   18307 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:28:58.091401   18307 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:28:58.091584   18307 main.go:141] libmachine: (functional-003989) Calling .GetState
I0912 18:28:58.093120   18307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0912 18:28:58.093157   18307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:28:58.107054   18307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
I0912 18:28:58.107439   18307 main.go:141] libmachine: () Calling .GetVersion
I0912 18:28:58.107897   18307 main.go:141] libmachine: Using API Version  1
I0912 18:28:58.107923   18307 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:28:58.108194   18307 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:28:58.108377   18307 main.go:141] libmachine: (functional-003989) Calling .DriverName
I0912 18:28:58.108611   18307 ssh_runner.go:195] Run: systemctl --version
I0912 18:28:58.108644   18307 main.go:141] libmachine: (functional-003989) Calling .GetSSHHostname
I0912 18:28:58.111168   18307 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:28:58.111568   18307 main.go:141] libmachine: (functional-003989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:f1", ip: ""} in network mk-functional-003989: {Iface:virbr1 ExpiryTime:2023-09-12 19:25:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:f1 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:functional-003989 Clientid:01:52:54:00:f1:f3:f1}
I0912 18:28:58.111611   18307 main.go:141] libmachine: (functional-003989) DBG | domain functional-003989 has defined IP address 192.168.50.43 and MAC address 52:54:00:f1:f3:f1 in network mk-functional-003989
I0912 18:28:58.111763   18307 main.go:141] libmachine: (functional-003989) Calling .GetSSHPort
I0912 18:28:58.111952   18307 main.go:141] libmachine: (functional-003989) Calling .GetSSHKeyPath
I0912 18:28:58.112138   18307 main.go:141] libmachine: (functional-003989) Calling .GetSSHUsername
I0912 18:28:58.112298   18307 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/functional-003989/id_rsa Username:docker}
I0912 18:28:58.197498   18307 build_images.go:151] Building image from path: /tmp/build.3333019650.tar
I0912 18:28:58.197565   18307 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 18:28:58.209377   18307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3333019650.tar
I0912 18:28:58.217555   18307 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3333019650.tar: stat -c "%s %y" /var/lib/minikube/build/build.3333019650.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3333019650.tar': No such file or directory
I0912 18:28:58.217586   18307 ssh_runner.go:362] scp /tmp/build.3333019650.tar --> /var/lib/minikube/build/build.3333019650.tar (3072 bytes)
I0912 18:28:58.258545   18307 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3333019650
I0912 18:28:58.294874   18307 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3333019650 -xf /var/lib/minikube/build/build.3333019650.tar
I0912 18:28:58.311549   18307 docker.go:339] Building image: /var/lib/minikube/build/build.3333019650
I0912 18:28:58.311633   18307 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-003989 /var/lib/minikube/build/build.3333019650
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0912 18:29:00.212348   18307 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-003989 /var/lib/minikube/build/build.3333019650: (1.900690447s)
I0912 18:29:00.212409   18307 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3333019650
I0912 18:29:00.222602   18307 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3333019650.tar
I0912 18:29:00.246734   18307 build_images.go:207] Built localhost/my-image:functional-003989 from /tmp/build.3333019650.tar
I0912 18:29:00.246773   18307 build_images.go:123] succeeded building to: functional-003989
I0912 18:29:00.246780   18307 build_images.go:124] failed building to: 
I0912 18:29:00.246803   18307 main.go:141] libmachine: Making call to close driver server
I0912 18:29:00.246814   18307 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:29:00.247107   18307 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
I0912 18:29:00.247153   18307 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:29:00.247167   18307 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:29:00.247186   18307 main.go:141] libmachine: Making call to close driver server
I0912 18:29:00.247199   18307 main.go:141] libmachine: (functional-003989) Calling .Close
I0912 18:29:00.247417   18307 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:29:00.247434   18307 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:29:00.247451   18307 main.go:141] libmachine: (functional-003989) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.60s)

TestFunctional/parallel/ImageCommands/Setup (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-003989
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image load --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr
E0912 18:28:10.837220   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image load --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr: (4.323873497s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image load --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image load --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr: (2.218934391s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.43s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-003989
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image load --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image load --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr: (4.623282025s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.85s)

TestFunctional/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

TestFunctional/parallel/DockerEnv/bash (1.2s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-003989 docker-env) && out/minikube-linux-amd64 status -p functional-003989"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-003989 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.20s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 service list -o json
functional_test.go:1493: Took "272.327004ms" to run "out/minikube-linux-amd64 -p functional-003989 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.43:31326
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image save gcr.io/google-containers/addon-resizer:functional-003989 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image save gcr.io/google-containers/addon-resizer:functional-003989 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.667577141s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.67s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.43:31326
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image rm gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "247.645817ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "48.212258ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "207.907411ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "40.50892ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

TestFunctional/parallel/MountCmd/any-port (26.48s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdany-port2635278857/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694543305820248243" to /tmp/TestFunctionalparallelMountCmdany-port2635278857/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694543305820248243" to /tmp/TestFunctionalparallelMountCmdany-port2635278857/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694543305820248243" to /tmp/TestFunctionalparallelMountCmdany-port2635278857/001/test-1694543305820248243
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (206.870808ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0912 18:28:26.198701   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 18:28 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 18:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 18:28 test-1694543305820248243
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh cat /mount-9p/test-1694543305820248243
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-003989 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [98282540-6708-4f0c-845c-52ef052e1030] Pending
helpers_test.go:344: "busybox-mount" [98282540-6708-4f0c-845c-52ef052e1030] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [98282540-6708-4f0c-845c-52ef052e1030] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [98282540-6708-4f0c-845c-52ef052e1030] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.015992162s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-003989 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdany-port2635278857/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.541486124s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-003989
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 image save --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-003989 image save --daemon gcr.io/google-containers/addon-resizer:functional-003989 --alsologtostderr: (2.209948692s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-003989
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.28s)

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdspecific-port2357732257/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.281944ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdspecific-port2357732257/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh "sudo umount -f /mount-9p": exit status 1 (237.890328ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-003989 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdspecific-port2357732257/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1215725004/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1215725004/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1215725004/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T" /mount1: exit status 1 (336.475084ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003989 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-003989 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1215725004/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1215725004/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1215725004/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-003989
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-003989
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-003989
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (395.96s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-875236 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-875236 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m16.848306622s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-875236 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0912 18:57:14.122474   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-875236 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.754186506s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-875236 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-875236 addons enable gvisor: (2.783785417s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d6250389-c03b-411e-a9cd-6859f9fead5d] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.023554597s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-875236 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [93ddff2a-5c41-41df-9a4d-a8583ca19ffb] Pending
helpers_test.go:344: "nginx-gvisor" [93ddff2a-5c41-41df-9a4d-a8583ca19ffb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [93ddff2a-5c41-41df-9a4d-a8583ca19ffb] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 52.029723379s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-875236
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-875236: (1m32.381793733s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-875236 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0912 19:00:17.254163   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-875236 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m11.659152593s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d6250389-c03b-411e-a9cd-6859f9fead5d] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [d6250389-c03b-411e-a9cd-6859f9fead5d] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.031635133s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [93ddff2a-5c41-41df-9a4d-a8583ca19ffb] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.011154377s
helpers_test.go:175: Cleaning up "gvisor-875236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-875236
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-875236: (1.094286003s)
--- PASS: TestGvisorAddon (395.96s)

TestImageBuild/serial/Setup (51.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-072259 --driver=kvm2 
E0912 18:29:27.640773   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-072259 --driver=kvm2 : (51.057120978s)
--- PASS: TestImageBuild/serial/Setup (51.06s)

TestImageBuild/serial/NormalBuild (1.1s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-072259
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-072259: (1.102336448s)
--- PASS: TestImageBuild/serial/NormalBuild (1.10s)

TestImageBuild/serial/BuildWithBuildArg (1.22s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-072259
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-072259: (1.224508983s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.22s)

TestImageBuild/serial/BuildWithDockerIgnore (0.36s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-072259
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.36s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.28s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-072259
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.28s)

TestIngressAddonLegacy/StartLegacyK8sCluster (109.11s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-780835 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0912 18:30:49.561901   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-780835 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m49.11286545s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (109.11s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.92s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons enable ingress --alsologtostderr -v=5: (14.916176087s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.92s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (33.18s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-780835 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-780835 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.370039618s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-780835 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-780835 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [43d489a3-bc67-4cd8-81dc-04cdd4ac9853] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [43d489a3-bc67-4cd8-81dc-04cdd4ac9853] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.022654002s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-780835 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-780835 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-780835 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.199
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons disable ingress-dns --alsologtostderr -v=1: (2.043900272s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-780835 addons disable ingress --alsologtostderr -v=1: (7.619217187s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (33.18s)

TestJSONOutput/start/Command (64s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-401729 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0912 18:33:05.715878   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:33:07.910387   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:07.915666   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:07.925915   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:07.946169   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:07.986445   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:08.066735   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:08.227164   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:08.547729   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:09.188655   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:10.469278   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:13.030035   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:18.150273   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:28.391105   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:33:33.402746   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:33:48.872203   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-401729 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m4.002200781s)
--- PASS: TestJSONOutput/start/Command (64.00s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-401729 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-401729 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-401729 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-401729 --output=json --user=testUser: (8.091621914s)
--- PASS: TestJSONOutput/stop/Command (8.09s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-145765 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-145765 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.30895ms)

-- stdout --
	{"specversion":"1.0","id":"cff3bea0-3921-4b76-8115-8a3dd75242b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-145765] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d95141c-58d3-4fde-98fe-89292215719f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17233"}}
	{"specversion":"1.0","id":"1b8f3231-d91f-4444-8f59-bf5984482c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1313bb43-ffa8-42f7-b825-629256e6120d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig"}}
	{"specversion":"1.0","id":"49811189-898b-4be9-bca6-fb32eaac7a51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube"}}
	{"specversion":"1.0","id":"c3509c86-ef9f-4d5b-9d70-cc3523fa98d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7bcd2441-7bd8-49bf-8a3c-6d01098a8599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"abffc542-99f3-40f3-8b1a-2dc22e176bc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-145765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-145765
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (104.92s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-011669 --driver=kvm2 
E0912 18:34:29.833280   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-011669 --driver=kvm2 : (50.674204192s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-013808 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-013808 --driver=kvm2 : (51.503937735s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-011669
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-013808
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-013808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-013808
helpers_test.go:175: Cleaning up "first-011669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-011669
--- PASS: TestMinikubeProfile (104.92s)

TestMountStart/serial/StartWithMountFirst (29.5s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-159208 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0912 18:35:51.755124   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-159208 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.49590927s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.50s)

TestMountStart/serial/VerifyMountFirst (0.35s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-159208 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-159208 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (29.28s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-178595 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-178595 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.282310274s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.28s)

TestMountStart/serial/VerifyMountSecond (0.36s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-178595 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-178595 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.86s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-159208 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-178595 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-178595 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (2.07s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-178595
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-178595: (2.073682567s)
--- PASS: TestMountStart/serial/Stop (2.07s)

TestMountStart/serial/RestartStopped (24.39s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-178595
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-178595: (23.394531112s)
E0912 18:37:14.122725   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:14.128002   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:14.138246   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:14.158528   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:14.198838   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (24.39s)

TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-178595 ssh -- ls /minikube-host
E0912 18:37:14.279519   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:14.439769   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-178595 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (127.12s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-348977 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0912 18:37:16.681882   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:19.243015   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:24.363625   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:34.604147   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:37:55.084473   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 18:38:05.715061   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:38:07.910745   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:38:35.596130   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:38:36.045067   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-348977 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m6.716741889s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.12s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-348977 -- rollout status deployment/busybox: (2.076028938s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-k9v4h -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-lzrq4 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-k9v4h -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-lzrq4 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-k9v4h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-lzrq4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.79s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-k9v4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-k9v4h -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-lzrq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-348977 -- exec busybox-5bc68d56bd-lzrq4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
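The pipeline the test executes in each busybox pod (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) can be exercised standalone: line 5 of the busybox-style `nslookup` output is the answer's `Address 1:` line, and the third space-separated field is the host IP. The sample output below is a hypothetical stand-in for what the pod would print, not captured from this run.

```shell
# Hypothetical busybox-style nslookup output for host.minikube.internal.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 selects the final "Address 1: <ip> <name>" line; field 3 is the IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # → 192.168.39.1
```

The test then feeds that value into `ping -c 1 <ip>`, which is what the `multinode_test.go:571` lines above show succeeding from both pods.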

                                                
                                    
TestMultiNode/serial/AddNode (46.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-348977 -v 3 --alsologtostderr
E0912 18:39:57.965908   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-348977 -v 3 --alsologtostderr: (46.258830426s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.82s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp testdata/cp-test.txt multinode-348977:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977:/home/docker/cp-test.txt multinode-348977-m02:/home/docker/cp-test_multinode-348977_multinode-348977-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m02 "sudo cat /home/docker/cp-test_multinode-348977_multinode-348977-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977:/home/docker/cp-test.txt multinode-348977-m03:/home/docker/cp-test_multinode-348977_multinode-348977-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m03 "sudo cat /home/docker/cp-test_multinode-348977_multinode-348977-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp testdata/cp-test.txt multinode-348977-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt multinode-348977:/home/docker/cp-test_multinode-348977-m02_multinode-348977.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977 "sudo cat /home/docker/cp-test_multinode-348977-m02_multinode-348977.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977-m02:/home/docker/cp-test.txt multinode-348977-m03:/home/docker/cp-test_multinode-348977-m02_multinode-348977-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m03 "sudo cat /home/docker/cp-test_multinode-348977-m02_multinode-348977-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp testdata/cp-test.txt multinode-348977-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile602775753/001/cp-test_multinode-348977-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt multinode-348977:/home/docker/cp-test_multinode-348977-m03_multinode-348977.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977 "sudo cat /home/docker/cp-test_multinode-348977-m03_multinode-348977.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 cp multinode-348977-m03:/home/docker/cp-test.txt multinode-348977-m02:/home/docker/cp-test_multinode-348977-m03_multinode-348977-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 ssh -n multinode-348977-m02 "sudo cat /home/docker/cp-test_multinode-348977-m03_multinode-348977-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)
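Each `cp` in the sequence above is immediately verified by an `ssh -n <node> "sudo cat <dest>"` readback. A minimal local sketch of that copy-and-verify round trip, with temporary directories standing in for the nodes (paths here are illustrative, not the test's):

```shell
set -e
src=$(mktemp -d)    # stands in for testdata/ on the host
node=$(mktemp -d)   # stands in for /home/docker on a node

echo "Test file for minikube cp" > "$src/cp-test.txt"

# minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$src/cp-test.txt" "$node/cp-test.txt"

# ssh -n <node> "sudo cat /home/docker/cp-test.txt" readback check
diff "$src/cp-test.txt" "$node/cp-test.txt"
echo "copy verified"

rm -rf "$src" "$node"
```

The real test repeats this for every source/destination pair across the three nodes, including node-to-node copies routed through the host.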

                                                
                                    
TestMultiNode/serial/StopNode (3.91s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-348977 node stop m03: (3.077421084s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-348977 status: exit status 7 (411.188339ms)

                                                
                                                
-- stdout --
	multinode-348977
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-348977-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-348977-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr: exit status 7 (418.640337ms)

                                                
                                                
-- stdout --
	multinode-348977
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-348977-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-348977-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 18:40:25.045306   25320 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:40:25.045538   25320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:40:25.045548   25320 out.go:309] Setting ErrFile to fd 2...
	I0912 18:40:25.045553   25320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:40:25.045736   25320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:40:25.045877   25320 out.go:303] Setting JSON to false
	I0912 18:40:25.045925   25320 mustload.go:65] Loading cluster: multinode-348977
	I0912 18:40:25.045965   25320 notify.go:220] Checking for updates...
	I0912 18:40:25.046496   25320 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:40:25.046514   25320 status.go:255] checking status of multinode-348977 ...
	I0912 18:40:25.046934   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.046982   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.064166   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0912 18:40:25.065425   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.065982   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.066010   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.066398   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.066561   25320 main.go:141] libmachine: (multinode-348977) Calling .GetState
	I0912 18:40:25.068255   25320 status.go:330] multinode-348977 host status = "Running" (err=<nil>)
	I0912 18:40:25.068267   25320 host.go:66] Checking if "multinode-348977" exists ...
	I0912 18:40:25.068503   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.068558   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.083986   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I0912 18:40:25.084337   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.084847   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.084881   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.085168   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.085335   25320 main.go:141] libmachine: (multinode-348977) Calling .GetIP
	I0912 18:40:25.088080   25320 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:40:25.088505   25320 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:37:31 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:40:25.088535   25320 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:40:25.088674   25320 host.go:66] Checking if "multinode-348977" exists ...
	I0912 18:40:25.088959   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.088989   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.104217   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0912 18:40:25.104610   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.105030   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.105059   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.105338   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.105513   25320 main.go:141] libmachine: (multinode-348977) Calling .DriverName
	I0912 18:40:25.105660   25320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 18:40:25.105677   25320 main.go:141] libmachine: (multinode-348977) Calling .GetSSHHostname
	I0912 18:40:25.108170   25320 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:40:25.108593   25320 main.go:141] libmachine: (multinode-348977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:2d:65", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:37:31 +0000 UTC Type:0 Mac:52:54:00:38:2d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-348977 Clientid:01:52:54:00:38:2d:65}
	I0912 18:40:25.108614   25320 main.go:141] libmachine: (multinode-348977) DBG | domain multinode-348977 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:2d:65 in network mk-multinode-348977
	I0912 18:40:25.108739   25320 main.go:141] libmachine: (multinode-348977) Calling .GetSSHPort
	I0912 18:40:25.108936   25320 main.go:141] libmachine: (multinode-348977) Calling .GetSSHKeyPath
	I0912 18:40:25.109067   25320 main.go:141] libmachine: (multinode-348977) Calling .GetSSHUsername
	I0912 18:40:25.109212   25320 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977/id_rsa Username:docker}
	I0912 18:40:25.198048   25320 ssh_runner.go:195] Run: systemctl --version
	I0912 18:40:25.203704   25320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:40:25.217712   25320 kubeconfig.go:92] found "multinode-348977" server: "https://192.168.39.209:8443"
	I0912 18:40:25.217740   25320 api_server.go:166] Checking apiserver status ...
	I0912 18:40:25.217768   25320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:40:25.229190   25320 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1874/cgroup
	I0912 18:40:25.237369   25320 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod4abe28b137e1ba2381404609e97bb3f7/3627cce96a103f54770606c5861a587b4a7b7dd97850e7842524f7009b6f64bf"
	I0912 18:40:25.237432   25320 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4abe28b137e1ba2381404609e97bb3f7/3627cce96a103f54770606c5861a587b4a7b7dd97850e7842524f7009b6f64bf/freezer.state
	I0912 18:40:25.246023   25320 api_server.go:204] freezer state: "THAWED"
	I0912 18:40:25.246052   25320 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0912 18:40:25.250989   25320 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0912 18:40:25.251010   25320 status.go:421] multinode-348977 apiserver status = Running (err=<nil>)
	I0912 18:40:25.251022   25320 status.go:257] multinode-348977 status: &{Name:multinode-348977 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:40:25.251043   25320 status.go:255] checking status of multinode-348977-m02 ...
	I0912 18:40:25.251360   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.251389   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.265734   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0912 18:40:25.266169   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.266556   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.266590   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.266890   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.267081   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .GetState
	I0912 18:40:25.268489   25320 status.go:330] multinode-348977-m02 host status = "Running" (err=<nil>)
	I0912 18:40:25.268502   25320 host.go:66] Checking if "multinode-348977-m02" exists ...
	I0912 18:40:25.268755   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.268785   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.282619   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I0912 18:40:25.283006   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.283387   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.283427   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.283730   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.283895   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .GetIP
	I0912 18:40:25.286795   25320 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:40:25.287230   25320 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:38:48 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:40:25.287268   25320 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:40:25.287364   25320 host.go:66] Checking if "multinode-348977-m02" exists ...
	I0912 18:40:25.287649   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.287691   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.301597   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0912 18:40:25.301980   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.302448   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.302468   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.302792   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.302977   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .DriverName
	I0912 18:40:25.303174   25320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 18:40:25.303196   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHHostname
	I0912 18:40:25.305678   25320 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:40:25.306137   25320 main.go:141] libmachine: (multinode-348977-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:ce", ip: ""} in network mk-multinode-348977: {Iface:virbr1 ExpiryTime:2023-09-12 19:38:48 +0000 UTC Type:0 Mac:52:54:00:fb:c0:ce Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-348977-m02 Clientid:01:52:54:00:fb:c0:ce}
	I0912 18:40:25.306172   25320 main.go:141] libmachine: (multinode-348977-m02) DBG | domain multinode-348977-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:c0:ce in network mk-multinode-348977
	I0912 18:40:25.306288   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHPort
	I0912 18:40:25.306458   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHKeyPath
	I0912 18:40:25.306621   25320 main.go:141] libmachine: (multinode-348977-m02) Calling .GetSSHUsername
	I0912 18:40:25.306774   25320 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3674/.minikube/machines/multinode-348977-m02/id_rsa Username:docker}
	I0912 18:40:25.390542   25320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:40:25.403207   25320 status.go:257] multinode-348977-m02 status: &{Name:multinode-348977-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:40:25.403235   25320 status.go:255] checking status of multinode-348977-m03 ...
	I0912 18:40:25.403535   25320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:40:25.403563   25320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:40:25.417957   25320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0912 18:40:25.418376   25320 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:40:25.418943   25320 main.go:141] libmachine: Using API Version  1
	I0912 18:40:25.418973   25320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:40:25.419281   25320 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:40:25.419471   25320 main.go:141] libmachine: (multinode-348977-m03) Calling .GetState
	I0912 18:40:25.420994   25320 status.go:330] multinode-348977-m03 host status = "Stopped" (err=<nil>)
	I0912 18:40:25.421008   25320 status.go:343] host is not running, skipping remaining checks
	I0912 18:40:25.421015   25320 status.go:257] multinode-348977-m03 status: &{Name:multinode-348977-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.91s)
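The stderr trace above shows the status probe running `sh -c "df -h /var | awk 'NR==2{print $5}'"` over SSH on each running node: line 2 of `df -h` is the filesystem's data row, and field 5 is the `Use%` column. The same pipeline run locally (against this machine's `/var` rather than a node's):

```shell
# Read the Use% column for /var, as minikube's status check does per node.
usage=$(df -h /var | awk 'NR==2{print $5}')
echo "filesystem usage: $usage"
```

For the stopped `m03` node the probe is skipped entirely ("host is not running, skipping remaining checks"), which is why its status reports every field as Stopped.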

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-348977 node start m03 --alsologtostderr: (31.763804336s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.40s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (112.26s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 stop
E0912 18:43:05.716012   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:43:07.910457   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 18:44:28.763872   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-348977 stop: (1m52.114775241s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-348977 status: exit status 7 (72.239575ms)

                                                
                                                
-- stdout --
	multinode-348977
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-348977-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr: exit status 7 (72.110765ms)

                                                
                                                
-- stdout --
	multinode-348977
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-348977-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 18:44:50.134506   26841 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:44:50.134758   26841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:44:50.134768   26841 out.go:309] Setting ErrFile to fd 2...
	I0912 18:44:50.134773   26841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:44:50.134943   26841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3674/.minikube/bin
	I0912 18:44:50.135095   26841 out.go:303] Setting JSON to false
	I0912 18:44:50.135121   26841 mustload.go:65] Loading cluster: multinode-348977
	I0912 18:44:50.135236   26841 notify.go:220] Checking for updates...
	I0912 18:44:50.135484   26841 config.go:182] Loaded profile config "multinode-348977": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 18:44:50.135496   26841 status.go:255] checking status of multinode-348977 ...
	I0912 18:44:50.135869   26841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:44:50.135926   26841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:44:50.149825   26841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0912 18:44:50.150272   26841 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:44:50.150797   26841 main.go:141] libmachine: Using API Version  1
	I0912 18:44:50.150824   26841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:44:50.151162   26841 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:44:50.151332   26841 main.go:141] libmachine: (multinode-348977) Calling .GetState
	I0912 18:44:50.152681   26841 status.go:330] multinode-348977 host status = "Stopped" (err=<nil>)
	I0912 18:44:50.152693   26841 status.go:343] host is not running, skipping remaining checks
	I0912 18:44:50.152698   26841 status.go:257] multinode-348977 status: &{Name:multinode-348977 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:44:50.152716   26841 status.go:255] checking status of multinode-348977-m02 ...
	I0912 18:44:50.152998   26841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0912 18:44:50.153029   26841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:44:50.167442   26841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0912 18:44:50.167797   26841 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:44:50.168200   26841 main.go:141] libmachine: Using API Version  1
	I0912 18:44:50.168218   26841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:44:50.168573   26841 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:44:50.168756   26841 main.go:141] libmachine: (multinode-348977-m02) Calling .GetState
	I0912 18:44:50.170472   26841 status.go:330] multinode-348977-m02 host status = "Stopped" (err=<nil>)
	I0912 18:44:50.170484   26841 status.go:343] host is not running, skipping remaining checks
	I0912 18:44:50.170489   26841 status.go:257] multinode-348977-m02 status: &{Name:multinode-348977-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (112.26s)

TestMultiNode/serial/RestartMultiNode (111.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-348977 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-348977 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m50.492957272s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-348977 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (111.01s)

TestMultiNode/serial/ValidateNameConflict (50.64s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-348977
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-348977-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-348977-m02 --driver=kvm2 : exit status 14 (55.928091ms)

-- stdout --
	* [multinode-348977-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-348977-m02' is duplicated with machine name 'multinode-348977-m02' in profile 'multinode-348977'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-348977-m03 --driver=kvm2 
E0912 18:47:14.122770   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-348977-m03 --driver=kvm2 : (49.585955599s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-348977
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-348977: exit status 80 (206.665194ms)

-- stdout --
	* Adding node m03 to cluster multinode-348977
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-348977-m03 already exists in multinode-348977-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-348977-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.64s)

TestPreload (169.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-777818 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0912 18:48:05.715390   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:48:07.910902   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-777818 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m27.337041129s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-777818 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-777818
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-777818: (13.089722913s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-777818 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0912 18:49:30.957282   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-777818 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m6.749685384s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-777818 image list
helpers_test.go:175: Cleaning up "test-preload-777818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-777818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-777818: (1.006550303s)
--- PASS: TestPreload (169.15s)

TestScheduledStopUnix (123.39s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-834093 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-834093 --memory=2048 --driver=kvm2 : (51.863011618s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-834093 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-834093 -n scheduled-stop-834093
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-834093 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-834093 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-834093 -n scheduled-stop-834093
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-834093
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-834093 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0912 18:52:14.122763   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-834093
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-834093: exit status 7 (55.543018ms)

-- stdout --
	scheduled-stop-834093
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-834093 -n scheduled-stop-834093
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-834093 -n scheduled-stop-834093: exit status 7 (55.42441ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-834093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-834093
--- PASS: TestScheduledStopUnix (123.39s)

TestSkaffold (139.66s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1783760215 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-108101 --memory=2600 --driver=kvm2 
E0912 18:53:05.716250   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 18:53:07.910112   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-108101 --memory=2600 --driver=kvm2 : (51.728874943s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1783760215 run --minikube-profile skaffold-108101 --kube-context skaffold-108101 --status-check=true --port-forward=false --interactive=false
E0912 18:53:37.166847   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1783760215 run --minikube-profile skaffold-108101 --kube-context skaffold-108101 --status-check=true --port-forward=false --interactive=false: (1m16.007710001s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-67f4c8f65-86bs9" [bb5165cb-9862-48bf-b126-1db058aae914] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.018943957s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7648959fc5-xj9rv" [3197e58e-1818-4f08-9940-25bbeb19b827] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010286143s
helpers_test.go:175: Cleaning up "skaffold-108101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-108101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-108101: (1.163060389s)
--- PASS: TestSkaffold (139.66s)

TestRunningBinaryUpgrade (190.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1300077339.exe start -p running-upgrade-994243 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1300077339.exe start -p running-upgrade-994243 --memory=2200 --vm-driver=kvm2 : (1m50.195244846s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-994243 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0912 19:03:52.806485   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-994243 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m18.8793649s)
helpers_test.go:175: Cleaning up "running-upgrade-994243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-994243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-994243: (1.442537678s)
--- PASS: TestRunningBinaryUpgrade (190.84s)

TestKubernetesUpgrade (283.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m41.60391643s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-844663
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-844663: (12.676680948s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-844663 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-844663 status --format={{.Host}}: exit status 7 (65.474115ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 : (1m52.987311777s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-844663 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (86.928945ms)

-- stdout --
	* [kubernetes-upgrade-844663] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-844663
	    minikube start -p kubernetes-upgrade-844663 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8446632 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-844663 --kubernetes-version=v1.28.1
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-844663 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 : (54.357117343s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-844663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-844663
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-844663: (1.26454401s)
--- PASS: TestKubernetesUpgrade (283.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685916 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-685916 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (61.100558ms)

-- stdout --
	* [NoKubernetes-685916] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3674/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3674/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (63.47s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685916 --driver=kvm2 
E0912 18:59:36.292511   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.298189   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.308534   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.328910   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.369361   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.449722   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.610173   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:36.930552   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685916 --driver=kvm2 : (1m3.019899733s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-685916 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (63.47s)

TestPause/serial/Start (93.23s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-617192 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0912 18:59:38.851046   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:41.412138   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:46.532749   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 18:59:56.773081   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-617192 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m33.225981634s)
--- PASS: TestPause/serial/Start (93.23s)

TestNoKubernetes/serial/StartWithStopK8s (25.77s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685916 --no-kubernetes --driver=kvm2 
E0912 19:00:58.214650   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685916 --no-kubernetes --driver=kvm2 : (24.390202861s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-685916 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-685916 status -o json: exit status 2 (242.31469ms)

-- stdout --
	{"Name":"NoKubernetes-685916","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-685916
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-685916: (1.13275176s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.77s)

TestNoKubernetes/serial/Start (30.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685916 --no-kubernetes --driver=kvm2 
E0912 19:01:08.764550   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685916 --no-kubernetes --driver=kvm2 : (30.072017841s)
--- PASS: TestNoKubernetes/serial/Start (30.07s)

TestPause/serial/SecondStartNoReconfiguration (57.72s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-617192 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-617192 --alsologtostderr -v=1 --driver=kvm2 : (57.679109975s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.72s)

TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestStoppedBinaryUpgrade/Upgrade (216.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3295335496.exe start -p stopped-upgrade-297436 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3295335496.exe start -p stopped-upgrade-297436 --memory=2200 --vm-driver=kvm2 : (2m19.723249531s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3295335496.exe -p stopped-upgrade-297436 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3295335496.exe -p stopped-upgrade-297436 stop: (13.270222798s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-297436 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0912 19:04:36.292332   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-297436 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m3.240431461s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (216.23s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-685916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-685916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (234.607702ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.09s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-685916
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-685916: (2.092797218s)
--- PASS: TestNoKubernetes/serial/Stop (2.09s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (25.48s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685916 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685916 --driver=kvm2 : (25.475444352s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-685916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-685916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.072711ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestPause/serial/Pause (1.43s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-617192 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-617192 --alsologtostderr -v=5: (1.43109255s)
--- PASS: TestPause/serial/Pause (1.43s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-617192 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-617192 --output=json --layout=cluster: exit status 2 (275.109124ms)

                                                
                                                
-- stdout --
	{"Name":"pause-617192","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-617192","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
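The `--layout=cluster` JSON above reports component health with HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and the command itself exits non-zero because not every component is running. A small sketch that pulls the per-component states out of a captured copy of that JSON (the `status.json` filename and the trimmed snippet are illustrative, not part of the test):

```shell
# Save a trimmed copy of the status JSON shown above (illustrative snippet).
cat > status.json <<'EOF'
{"Name":"pause-617192","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-617192","StatusCode":200,"StatusName":"OK",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
EOF

# Print each node component and its state.
python3 - <<'EOF'
import json

with open("status.json") as f:
    status = json.load(f)
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f'{node["Name"]}/{name}: {comp["StatusName"]} ({comp["StatusCode"]})')
EOF
```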

                                                
                                    
TestPause/serial/Unpause (0.96s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-617192 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

                                                
                                    
TestPause/serial/PauseAgain (0.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-617192 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.64s)

                                                
                                    
TestPause/serial/DeletePaused (1.13s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-617192 --alsologtostderr -v=5
E0912 19:02:14.123387   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-617192 --alsologtostderr -v=5: (1.130793911s)
--- PASS: TestPause/serial/DeletePaused (1.13s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (16.12s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0912 19:02:20.135841   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.114897643s)
--- PASS: TestPause/serial/VerifyDeletedResources (16.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (144.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-295705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-295705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m24.940618451s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (127.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-298911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1
E0912 19:02:36.004710   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:41.124985   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:51.365212   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:03:05.715150   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 19:03:07.910974   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 19:03:11.845485   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-298911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1: (2m7.989700347s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (127.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.81s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-298911 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d80b7a4-22e6-42cd-99a4-224ad92f7dda] Pending
helpers_test.go:344: "busybox" [2d80b7a4-22e6-42cd-99a4-224ad92f7dda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d80b7a4-22e6-42cd-99a4-224ad92f7dda] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.26373409s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-298911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-298911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-298911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.180706748s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-298911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-298911 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-298911 --alsologtostderr -v=3: (13.101027991s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-295705 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fce70753-d932-443c-b783-c42bb45983c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fce70753-d932-443c-b783-c42bb45983c6] Running
E0912 19:05:03.976554   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.047965518s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-295705 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.09s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-297436
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-297436: (2.086322668s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (73.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-310267 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-310267 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1: (1m13.739470573s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298911 -n no-preload-298911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298911 -n no-preload-298911: exit status 7 (77.555868ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-298911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (334.41s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-298911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-298911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1: (5m34.140337639s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298911 -n no-preload-298911
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-295705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-295705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016426048s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-295705 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-295705 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-295705 --alsologtostderr -v=3: (13.201447311s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (122.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-795683 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1
E0912 19:05:14.726720   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-795683 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1: (2m2.775770911s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (122.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295705 -n old-k8s-version-295705
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295705 -n old-k8s-version-295705: exit status 7 (61.142021ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-295705 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (500.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-295705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0912 19:06:10.958424   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-295705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (8m20.680058855s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295705 -n old-k8s-version-295705
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (500.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-310267 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d7cd039-5b02-46d2-ae01-3b4f97ba1f6e] Pending
helpers_test.go:344: "busybox" [6d7cd039-5b02-46d2-ae01-3b4f97ba1f6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6d7cd039-5b02-46d2-ae01-3b4f97ba1f6e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.024032953s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-310267 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-310267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-310267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.089084974s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-310267 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-310267 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-310267 --alsologtostderr -v=3: (13.135999944s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-310267 -n embed-certs-310267
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-310267 -n embed-certs-310267: exit status 7 (75.328429ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-310267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (344.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-310267 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-310267 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1: (5m44.598390836s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-310267 -n embed-certs-310267
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-795683 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6819d828-787f-462c-b1f5-81bc67d4a2f0] Pending
E0912 19:07:14.122448   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6819d828-787f-462c-b1f5-81bc67d4a2f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6819d828-787f-462c-b1f5-81bc67d4a2f0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.032111441s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-795683 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-795683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-795683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.121429145s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-795683 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-795683 --alsologtostderr -v=3
E0912 19:07:30.884742   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-795683 --alsologtostderr -v=3: (13.108487715s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683: exit status 7 (68.319023ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-795683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-795683 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1
E0912 19:07:58.567063   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:08:05.715772   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 19:08:07.910718   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
E0912 19:09:36.292800   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
E0912 19:10:17.167411   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-795683 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1: (5m33.639964415s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-92bht" [14b4d5a9-a7b1-4782-bb95-3e05b09b7066] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020282439s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-92bht" [14b4d5a9-a7b1-4782-bb95-3e05b09b7066] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011302921s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-298911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-298911 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.56s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-298911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298911 -n no-preload-298911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298911 -n no-preload-298911: exit status 2 (244.208347ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-298911 -n no-preload-298911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-298911 -n no-preload-298911: exit status 2 (245.874903ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-298911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298911 -n no-preload-298911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-298911 -n no-preload-298911
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.56s)

TestStartStop/group/newest-cni/serial/FirstStart (72.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-272718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-272718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1: (1m12.787728195s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-272718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-272718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.220194345s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/newest-cni/serial/Stop (8.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-272718 --alsologtostderr -v=3
E0912 19:12:14.122564   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-272718 --alsologtostderr -v=3: (8.111438946s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-272718 -n newest-cni-272718
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-272718 -n newest-cni-272718: exit status 7 (78.089287ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-272718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (50.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-272718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-272718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1: (50.563133876s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-272718 -n newest-cni-272718
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.84s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qhznb" [8c6e7991-f271-44a6-86c3-221e19566f0c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0912 19:12:30.884205   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qhznb" [8c6e7991-f271-44a6-86c3-221e19566f0c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.023064538s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qhznb" [8c6e7991-f271-44a6-86c3-221e19566f0c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017146832s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-310267 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-310267 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-310267 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-310267 -n embed-certs-310267
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-310267 -n embed-certs-310267: exit status 2 (276.016859ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-310267 -n embed-certs-310267
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-310267 -n embed-certs-310267: exit status 2 (268.702971ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-310267 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-310267 -n embed-certs-310267
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-310267 -n embed-certs-310267
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

TestNetworkPlugins/group/auto/Start (110.06s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0912 19:13:05.715494   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/addons-494250/client.crt: no such file or directory
E0912 19:13:07.910985   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/functional-003989/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m50.063960189s)
--- PASS: TestNetworkPlugins/group/auto/Start (110.06s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-272718 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-272718 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-272718 -n newest-cni-272718
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-272718 -n newest-cni-272718: exit status 2 (280.771932ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-272718 -n newest-cni-272718
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-272718 -n newest-cni-272718: exit status 2 (278.150359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-272718 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-272718 -n newest-cni-272718
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-272718 -n newest-cni-272718
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.56s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fn6jf" [45e37319-bb0d-4931-8038-53cefe45823b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fn6jf" [45e37319-bb0d-4931-8038-53cefe45823b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.025484948s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.03s)

TestNetworkPlugins/group/kindnet/Start (92.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m32.314324497s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.31s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fn6jf" [45e37319-bb0d-4931-8038-53cefe45823b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018520964s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-795683 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-795683 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-795683 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683: exit status 2 (253.813944ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683: exit status 2 (262.231092ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-795683 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-795683 -n default-k8s-diff-port-795683
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

TestNetworkPlugins/group/calico/Start (110.4s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m50.403149335s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.40s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bsxb4" [789d945e-822c-4f8f-b180-6d170e98a9a8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016981566s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bsxb4" [789d945e-822c-4f8f-b180-6d170e98a9a8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011799456s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-295705 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-295705 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (4.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-295705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295705 -n old-k8s-version-295705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295705 -n old-k8s-version-295705: exit status 2 (299.275736ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295705 -n old-k8s-version-295705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295705 -n old-k8s-version-295705: exit status 2 (280.644277ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-295705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-295705 --alsologtostderr -v=1: (2.352438312s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295705 -n old-k8s-version-295705
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295705 -n old-k8s-version-295705
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (101.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0912 19:14:36.292109   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/skaffold-108101/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m41.348993038s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.47s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9p6bj" [9e024c51-46a3-4d20-b5b6-2e1af39bb81c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 19:14:43.533941   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:43.539361   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:43.549645   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:43.569944   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:43.611121   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:43.692152   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:43.852482   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:44.173419   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:44.813679   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:14:46.094850   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9p6bj" [9e024c51-46a3-4d20-b5b6-2e1af39bb81c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.024102478s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tr8sk" [5ccff8a6-77b5-4831-ab57-eb6b600613ec] Running
E0912 19:14:48.655831   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028441109s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.47s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-591320 replace --force -f testdata/netcat-deployment.yaml: (1.395658146s)
E0912 19:14:53.776083   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8zwmz" [672af5d9-546a-4ff5-8226-89717471d733] Pending
helpers_test.go:344: "netcat-56589dfd74-8zwmz" [672af5d9-546a-4ff5-8226-89717471d733] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8zwmz" [672af5d9-546a-4ff5-8226-89717471d733] Running
E0912 19:15:00.824201   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
E0912 19:15:02.105028   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
E0912 19:15:04.016393   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
E0912 19:15:04.665917   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.034767134s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.47s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/Start (84.26s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0912 19:15:20.027672   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m24.262223395s)
--- PASS: TestNetworkPlugins/group/false/Start (84.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (107.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m47.829976626s)
--- PASS: TestNetworkPlugins/group/flannel/Start (107.83s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d5hgb" [592f1f8d-31b7-4e7c-b959-0ce4210264bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.025911622s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vcvvp" [de13dc67-870b-4edd-8cfb-59d12e0c797f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 19:15:40.508291   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vcvvp" [de13dc67-870b-4edd-8cfb-59d12e0c797f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012068554s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gpdwn" [f4a9787d-6cb9-4b25-beba-b1de9040a21c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gpdwn" [f4a9787d-6cb9-4b25-beba-b1de9040a21c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.016441089s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m23.080784895s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.08s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (108.23s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0912 19:16:21.468548   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m48.234459723s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (108.23s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-42jsv" [dd9c8252-8f72-4a3c-93c2-687ecaa2f207] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-42jsv" [dd9c8252-8f72-4a3c-93c2-687ecaa2f207] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.023705908s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.43s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (114.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-591320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m54.736312684s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (114.74s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wb84w" [f4bbfec2-2b42-4f62-b0be-5b2e6f99c9a7] Running
E0912 19:17:13.873147   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:13.878466   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:13.888821   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:13.909134   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:13.949432   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:14.030632   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:14.122701   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/ingress-addon-legacy-780835/client.crt: no such file or directory
E0912 19:17:14.190949   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:14.511794   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:15.152254   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:16.432541   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.025388831s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (16.92s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-591320 replace --force -f testdata/netcat-deployment.yaml
E0912 19:17:18.993586   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
net_test.go:149: (dbg) Done: kubectl --context flannel-591320 replace --force -f testdata/netcat-deployment.yaml: (1.784971636s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9mqlw" [78b92aff-efcb-4c5c-b2b8-cdeb5ad497c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 19:17:24.114752   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
E0912 19:17:27.379284   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/no-preload-298911/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9mqlw" [78b92aff-efcb-4c5c-b2b8-cdeb5ad497c6] Running
E0912 19:17:30.884584   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.048437674s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.92s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-226p2" [985dc4ec-674b-4ceb-8990-c98d22c6c82c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 19:17:34.355339   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/default-k8s-diff-port-795683/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-226p2" [985dc4ec-674b-4ceb-8990-c98d22c6c82c] Running
E0912 19:17:43.388790   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/old-k8s-version-295705/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.016802423s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.40s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.47s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6pbm9" [5fc7959a-5565-456a-a7eb-950b5656e0f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6pbm9" [5fc7959a-5565-456a-a7eb-950b5656e0f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.012671387s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.47s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-591320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-591320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5tbqr" [66fb9461-db42-4b5a-af22-48f8aea2a7d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5tbqr" [66fb9461-db42-4b5a-af22-48f8aea2a7d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.012456261s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-591320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-591320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
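The Localhost and HairPin subtests above all reduce to the same zero-I/O probe, `nc -w 5 -i 5 -z <target> 8080`, executed inside the netcat pod: the check passes if and only if a TCP connection to the target can be completed within the timeout. A minimal Python sketch of that probe logic (the local listener here is an illustrative stand-in for the in-cluster netcat service; none of these names come from the test suite):

```python
import socket
import threading

def tcp_probe(host, port, timeout=5.0):
    """Mimic `nc -w 5 -z host port`: succeed iff a TCP connect completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in for the in-cluster netcat service: a local listener on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
_, port = server.getsockname()
threading.Thread(target=server.accept, daemon=True).start()

print(tcp_probe("127.0.0.1", port))  # True: the "service" is reachable
```

In the suite itself the probe runs via `kubectl exec deployment/netcat`, so a Localhost or HairPin failure points at pod-network or service routing rather than name resolution, which the separate nslookup-based DNS subtests cover.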

Test skip (31/317)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-339221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-339221
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/cilium (3.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0912 19:02:30.884583   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:30.889907   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:30.900327   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:30.920628   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:30.961010   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:31.041116   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:31.201819   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:31.522564   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:32.162988   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
E0912 19:02:33.444179   10848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/gvisor-875236/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: cilium-591320 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-591320
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-591320
>>> host: /etc/nsswitch.conf:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"
>>> host: /etc/hosts:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"
>>> host: /etc/resolv.conf:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-591320
>>> host: crictl pods:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"
>>> host: crictl containers:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> k8s: describe netcat deployment:
error: context "cilium-591320" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-591320" does not exist

>>> k8s: netcat logs:
error: context "cilium-591320" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-591320" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-591320" does not exist

>>> k8s: coredns logs:
error: context "cilium-591320" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-591320" does not exist

>>> k8s: api server logs:
error: context "cilium-591320" does not exist

>>> host: /etc/cni:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: ip a s:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: ip r s:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: iptables-save:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: iptables table nat:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-591320

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-591320

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-591320" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-591320" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-591320

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-591320

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-591320" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-591320" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-591320" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-591320" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-591320" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: kubelet daemon config:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> k8s: kubelet logs:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17233-3674/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:02:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.72.250:8443
  name: cert-expiration-161295
contexts:
- context:
    cluster: cert-expiration-161295
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:02:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-161295
  name: cert-expiration-161295
current-context: cert-expiration-161295
kind: Config
preferences: {}
users:
- name: cert-expiration-161295
  user:
    client-certificate: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/cert-expiration-161295/client.crt
    client-key: /home/jenkins/minikube-integration/17233-3674/.minikube/profiles/cert-expiration-161295/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-591320

>>> host: docker daemon status:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: docker daemon config:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: docker system info:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: cri-docker daemon status:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: cri-docker daemon config:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: cri-dockerd version:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: containerd daemon status:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: containerd daemon config:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: containerd config dump:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: crio daemon status:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: crio daemon config:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: /etc/crio:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

>>> host: crio config:
* Profile "cilium-591320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-591320"

----------------------- debugLogs end: cilium-591320 [took: 3.661523004s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-591320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-591320
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)
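Editor's note: every probe in the debugLogs block above failed the same way because the `cilium-591320` profile had already been removed before the log collector ran. A collector could skip such probes with a guard like the sketch below; the `profile_exists` helper and the hard-coded profile list are hypothetical illustrations (in a real harness the list would come from `minikube profile list`), not part of the actual test code.

```shell
#!/bin/sh
# Hypothetical guard: skip debug probes when the target profile is absent.
# profile_exists checks a profile name ($1) against a newline-separated
# list of known profiles ($2) using an exact whole-line match.
profile_exists() {
  printf '%s\n' "$2" | grep -qx "$1"
}

# Illustrative list; both names appear elsewhere in this report.
known_profiles="multinode-348977
cert-expiration-161295"

if profile_exists "cilium-591320" "$known_profiles"; then
  echo "collecting debug logs for cilium-591320"
else
  echo "skipping probes: profile cilium-591320 not found"
fi
```

Run as written, the guard takes the `else` branch and prints the "skipping probes" line, which is exactly the situation the repeated `context was not found` errors above record.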
