Test Report: KVM_Linux 17044

df168c2d81a1825740328057ca29cb976d1a3614:2023-08-12:30542

Failed tests (2/320)

Order   Failed test                                Duration (s)
221     TestMultiNode/serial/RestartKeepsNodes     159.8
276     TestNoKubernetes/serial/ProfileList        139.03
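
To retry the failures locally before reading the logs: a minimal sketch, assuming a minikube source checkout and the TEST_ARGS convention from minikube's contributing guide (the make target and flags are assumptions taken from that guide, not from this report).

    # Rebuild the binary under test, then re-run each failing test against the
    # kvm2 driver this job uses. Adjust driver and flags to your environment.
    make out/minikube-linux-amd64
    env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestMultiNode/serial/RestartKeepsNodes" make integration
    env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestNoKubernetes/serial/ProfileList" make integration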
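The detail below covers the first failure. Its three external steps, reconstructed verbatim from the (dbg) Run lines in the log, are sketched here; the restart is the step that exits non-zero:

    out/minikube-linux-amd64 node list -p multinode-618164
    out/minikube-linux-amd64 stop -p multinode-618164                                      # succeeds (~28.5s)
    out/minikube-linux-amd64 start -p multinode-618164 --wait=true -v=8 --alsologtostderr  # exit status 90 (~2m8s)
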
TestMultiNode/serial/RestartKeepsNodes (159.8s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-618164
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-618164
E0811 23:23:29.911817   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-618164: (28.499452073s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-618164 --wait=true -v=8 --alsologtostderr
E0811 23:23:51.338692   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:23:57.597596   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:24:51.067389   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:25:14.384498   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-618164 --wait=true -v=8 --alsologtostderr: exit status 90 (2m8.267833338s)

-- stdout --
	* [multinode-618164] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-618164 in cluster multinode-618164
	* Restarting existing kvm2 VM for "multinode-618164" ...
	* Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-618164-m02 in cluster multinode-618164
	* Restarting existing kvm2 VM for "multinode-618164-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.6
	* Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	  - env NO_PROXY=192.168.39.6
	* Verifying Kubernetes components...
	* Starting worker node multinode-618164-m03 in cluster multinode-618164
	* Restarting existing kvm2 VM for "multinode-618164-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.6,192.168.39.254
	
	

-- /stdout --
** stderr ** 
	I0811 23:23:32.722173   32156 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:23:32.722281   32156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:23:32.722294   32156 out.go:309] Setting ErrFile to fd 2...
	I0811 23:23:32.722298   32156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:23:32.722512   32156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	I0811 23:23:32.723027   32156 out.go:303] Setting JSON to false
	I0811 23:23:32.723899   32156 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3967,"bootTime":1691792246,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:23:32.723952   32156 start.go:138] virtualization: kvm guest
	I0811 23:23:32.727353   32156 out.go:177] * [multinode-618164] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0811 23:23:32.729163   32156 notify.go:220] Checking for updates...
	I0811 23:23:32.729177   32156 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:23:32.730904   32156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:23:32.732568   32156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:23:32.734361   32156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:23:32.735936   32156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0811 23:23:32.737453   32156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:23:32.739333   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:23:32.739432   32156 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:23:32.739796   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:23:32.739843   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:23:32.753729   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45667
	I0811 23:23:32.754115   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:23:32.754720   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:23:32.754748   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:23:32.755035   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:23:32.755226   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:32.789495   32156 out.go:177] * Using the kvm2 driver based on existing profile
	I0811 23:23:32.791161   32156 start.go:298] selected driver: kvm2
	I0811 23:23:32.791189   32156 start.go:901] validating driver "kvm2" against &{Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:23:32.791301   32156 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:23:32.791591   32156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:23:32.791655   32156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17044-9593/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0811 23:23:32.806055   32156 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.1
	I0811 23:23:32.806713   32156 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 23:23:32.806758   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:23:32.806766   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:23:32.806777   32156 start_flags.go:319] config:
	{Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}

	I0811 23:23:32.806969   32156 iso.go:125] acquiring lock: {Name:mkbb435ea885d9d203ce0113f8005e4b53bc59ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:23:32.808986   32156 out.go:177] * Starting control plane node multinode-618164 in cluster multinode-618164
	I0811 23:23:32.810394   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:23:32.810441   32156 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4
	I0811 23:23:32.810460   32156 cache.go:57] Caching tarball of preloaded images
	I0811 23:23:32.810544   32156 preload.go:174] Found /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0811 23:23:32.810557   32156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0811 23:23:32.810731   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:23:32.810951   32156 start.go:365] acquiring machines lock for multinode-618164: {Name:mk5e6cee1d1e9195cd61b1fff8d9384d7220567d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0811 23:23:32.811005   32156 start.go:369] acquired machines lock for "multinode-618164" in 32.003µs
	I0811 23:23:32.811026   32156 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:23:32.811039   32156 fix.go:54] fixHost starting: 
	I0811 23:23:32.811341   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:23:32.811377   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:23:32.825189   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0811 23:23:32.825607   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:23:32.826297   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:23:32.826317   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:23:32.826651   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:23:32.826809   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:32.826933   32156 main.go:141] libmachine: (multinode-618164) Calling .GetState
	I0811 23:23:32.828364   32156 fix.go:102] recreateIfNeeded on multinode-618164: state=Stopped err=<nil>
	I0811 23:23:32.828391   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	W0811 23:23:32.828569   32156 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:23:32.831881   32156 out.go:177] * Restarting existing kvm2 VM for "multinode-618164" ...
	I0811 23:23:32.833637   32156 main.go:141] libmachine: (multinode-618164) Calling .Start
	I0811 23:23:32.833821   32156 main.go:141] libmachine: (multinode-618164) Ensuring networks are active...
	I0811 23:23:32.834601   32156 main.go:141] libmachine: (multinode-618164) Ensuring network default is active
	I0811 23:23:32.834951   32156 main.go:141] libmachine: (multinode-618164) Ensuring network mk-multinode-618164 is active
	I0811 23:23:32.835359   32156 main.go:141] libmachine: (multinode-618164) Getting domain xml...
	I0811 23:23:32.836112   32156 main.go:141] libmachine: (multinode-618164) Creating domain...
	I0811 23:23:34.036415   32156 main.go:141] libmachine: (multinode-618164) Waiting to get IP...
	I0811 23:23:34.037475   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.037865   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.037955   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.037865   32185 retry.go:31] will retry after 250.674646ms: waiting for machine to come up
	I0811 23:23:34.290585   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.291097   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.291139   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.291053   32185 retry.go:31] will retry after 298.664709ms: waiting for machine to come up
	I0811 23:23:34.591686   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.592087   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.592116   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.592054   32185 retry.go:31] will retry after 344.854456ms: waiting for machine to come up
	I0811 23:23:34.938436   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.938925   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.938950   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.938853   32185 retry.go:31] will retry after 465.356896ms: waiting for machine to come up
	I0811 23:23:35.405439   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:35.405855   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:35.405876   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:35.405839   32185 retry.go:31] will retry after 468.026827ms: waiting for machine to come up
	I0811 23:23:35.874905   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:35.875325   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:35.875355   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:35.875269   32185 retry.go:31] will retry after 688.85699ms: waiting for machine to come up
	I0811 23:23:36.566140   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:36.566553   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:36.566584   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:36.566507   32185 retry.go:31] will retry after 978.359324ms: waiting for machine to come up
	I0811 23:23:37.546660   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:37.547122   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:37.547151   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:37.547050   32185 retry.go:31] will retry after 1.294102807s: waiting for machine to come up
	I0811 23:23:38.842673   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:38.843078   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:38.843112   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:38.843031   32185 retry.go:31] will retry after 1.213055571s: waiting for machine to come up
	I0811 23:23:40.058237   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:40.058595   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:40.058619   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:40.058554   32185 retry.go:31] will retry after 1.75151759s: waiting for machine to come up
	I0811 23:23:41.812537   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:41.812837   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:41.812873   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:41.812810   32185 retry.go:31] will retry after 1.77396365s: waiting for machine to come up
	I0811 23:23:43.588031   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:43.588526   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:43.588569   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:43.588493   32185 retry.go:31] will retry after 3.271610328s: waiting for machine to come up
	I0811 23:23:46.863065   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:46.863556   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:46.863579   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:46.863520   32185 retry.go:31] will retry after 4.415362505s: waiting for machine to come up
	I0811 23:23:51.283014   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.283528   32156 main.go:141] libmachine: (multinode-618164) Found IP for machine: 192.168.39.6
	I0811 23:23:51.283574   32156 main.go:141] libmachine: (multinode-618164) Reserving static IP address...
	I0811 23:23:51.283610   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has current primary IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.283984   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "multinode-618164", mac: "52:54:00:ac:97:b5", ip: "192.168.39.6"} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.284028   32156 main.go:141] libmachine: (multinode-618164) DBG | skip adding static IP to network mk-multinode-618164 - found existing host DHCP lease matching {name: "multinode-618164", mac: "52:54:00:ac:97:b5", ip: "192.168.39.6"}
	I0811 23:23:51.284039   32156 main.go:141] libmachine: (multinode-618164) Reserved static IP address: 192.168.39.6
	I0811 23:23:51.284051   32156 main.go:141] libmachine: (multinode-618164) DBG | Getting to WaitForSSH function...
	I0811 23:23:51.284075   32156 main.go:141] libmachine: (multinode-618164) Waiting for SSH to be available...
	I0811 23:23:51.285884   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.286217   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.286255   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.286353   32156 main.go:141] libmachine: (multinode-618164) DBG | Using SSH client type: external
	I0811 23:23:51.286384   32156 main.go:141] libmachine: (multinode-618164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa (-rw-------)
	I0811 23:23:51.286417   32156 main.go:141] libmachine: (multinode-618164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0811 23:23:51.286428   32156 main.go:141] libmachine: (multinode-618164) DBG | About to run SSH command:
	I0811 23:23:51.286436   32156 main.go:141] libmachine: (multinode-618164) DBG | exit 0
	I0811 23:23:51.379359   32156 main.go:141] libmachine: (multinode-618164) DBG | SSH cmd err, output: <nil>: 
	I0811 23:23:51.379772   32156 main.go:141] libmachine: (multinode-618164) Calling .GetConfigRaw
	I0811 23:23:51.380347   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:51.382832   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.383264   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.383303   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.383597   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:23:51.383766   32156 machine.go:88] provisioning docker machine ...
	I0811 23:23:51.383780   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:51.383996   32156 main.go:141] libmachine: (multinode-618164) Calling .GetMachineName
	I0811 23:23:51.384173   32156 buildroot.go:166] provisioning hostname "multinode-618164"
	I0811 23:23:51.384192   32156 main.go:141] libmachine: (multinode-618164) Calling .GetMachineName
	I0811 23:23:51.384352   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.386674   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.387064   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.387095   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.387262   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.387423   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.387565   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.387682   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.387844   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:51.388302   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:51.388324   32156 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-618164 && echo "multinode-618164" | sudo tee /etc/hostname
	I0811 23:23:51.520050   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-618164
	
	I0811 23:23:51.520086   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.523082   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.523564   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.523595   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.523715   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.523934   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.524094   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.524268   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.524454   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:51.524834   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:51.524851   32156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-618164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-618164/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-618164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:23:51.657368   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:23:51.657397   32156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17044-9593/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-9593/.minikube}
	I0811 23:23:51.657452   32156 buildroot.go:174] setting up certificates
	I0811 23:23:51.657467   32156 provision.go:83] configureAuth start
	I0811 23:23:51.657480   32156 main.go:141] libmachine: (multinode-618164) Calling .GetMachineName
	I0811 23:23:51.657779   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:51.660466   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.660823   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.660855   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.661021   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.663049   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.663440   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.663476   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.663588   32156 provision.go:138] copyHostCerts
	I0811 23:23:51.663629   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:23:51.663671   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem, removing ...
	I0811 23:23:51.663680   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:23:51.663763   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem (1078 bytes)
	I0811 23:23:51.663874   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:23:51.663900   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem, removing ...
	I0811 23:23:51.663907   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:23:51.663950   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem (1123 bytes)
	I0811 23:23:51.664023   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:23:51.664045   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem, removing ...
	I0811 23:23:51.664050   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:23:51.664084   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem (1675 bytes)
	I0811 23:23:51.664157   32156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem org=jenkins.multinode-618164 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube multinode-618164]
	I0811 23:23:51.759895   32156 provision.go:172] copyRemoteCerts
	I0811 23:23:51.759959   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:23:51.759985   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.762635   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.762991   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.763026   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.763290   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.763487   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.763674   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.763847   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:51.852641   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:23:51.852720   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0811 23:23:51.878843   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:23:51.878911   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0811 23:23:51.904738   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:23:51.904819   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:23:51.930193   32156 provision.go:86] duration metric: configureAuth took 272.712825ms
	I0811 23:23:51.930229   32156 buildroot.go:189] setting minikube options for container-runtime
	I0811 23:23:51.930438   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:23:51.930521   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:51.930793   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.933463   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.933835   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.933860   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.934016   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.934192   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.934362   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.934543   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.934740   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:51.935138   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:51.935152   32156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 23:23:52.056995   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0811 23:23:52.057017   32156 buildroot.go:70] root file system type: tmpfs
	I0811 23:23:52.057140   32156 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 23:23:52.057163   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:52.060121   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.060522   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:52.060557   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.060692   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:52.060900   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.061113   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.061313   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:52.061520   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:52.062103   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:52.062200   32156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 23:23:52.195958   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 23:23:52.195988   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:52.198688   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.199053   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:52.199074   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.199282   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:52.199470   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.199636   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.199779   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:52.199906   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:52.200284   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:52.200307   32156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 23:23:53.071780   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0811 23:23:53.071817   32156 machine.go:91] provisioned docker machine in 1.688040811s
	I0811 23:23:53.071826   32156 start.go:300] post-start starting for "multinode-618164" (driver="kvm2")
	I0811 23:23:53.071834   32156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:23:53.071853   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.072202   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:23:53.072224   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.074823   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.075153   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.075186   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.075316   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.075502   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.075638   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.075760   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:53.164782   32156 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:23:53.168913   32156 command_runner.go:130] > NAME=Buildroot
	I0811 23:23:53.168930   32156 command_runner.go:130] > VERSION=2021.02.12-1-gb58903a-dirty
	I0811 23:23:53.168936   32156 command_runner.go:130] > ID=buildroot
	I0811 23:23:53.168944   32156 command_runner.go:130] > VERSION_ID=2021.02.12
	I0811 23:23:53.168950   32156 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0811 23:23:53.168984   32156 info.go:137] Remote host: Buildroot 2021.02.12
	I0811 23:23:53.168997   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/addons for local assets ...
	I0811 23:23:53.169057   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/files for local assets ...
	I0811 23:23:53.169150   32156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> 168362.pem in /etc/ssl/certs
	I0811 23:23:53.169164   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /etc/ssl/certs/168362.pem
	I0811 23:23:53.169262   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:23:53.177591   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:23:53.200087   32156 start.go:303] post-start completed in 128.247996ms
	I0811 23:23:53.200108   32156 fix.go:56] fixHost completed within 20.389073179s
	I0811 23:23:53.200136   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.203019   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.203417   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.203444   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.203600   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.203829   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.204071   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.204251   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.204461   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:53.204868   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:53.204884   32156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0811 23:23:53.328309   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691796233.277880091
	
	I0811 23:23:53.328348   32156 fix.go:206] guest clock: 1691796233.277880091
	I0811 23:23:53.328355   32156 fix.go:219] Guest: 2023-08-11 23:23:53.277880091 +0000 UTC Remote: 2023-08-11 23:23:53.20011316 +0000 UTC m=+20.510323801 (delta=77.766931ms)
	I0811 23:23:53.328381   32156 fix.go:190] guest clock delta is within tolerance: 77.766931ms
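
The fix.go lines above read the guest's clock over SSH (`date +%s.%N`) and accept the run because the skew from the host (77.766931ms) falls inside a tolerance. A minimal Go sketch of that comparison; the 2-second tolerance and the name `clockDeltaWithinTolerance` are illustrative assumptions, not minikube's actual values:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaWithinTolerance reports the absolute guest/host skew and whether
    // it falls inside the given tolerance. Hypothetical helper for illustration.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(77766931 * time.Nanosecond) // the 77.766931ms delta from this log
        delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second) // assumed tolerance
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
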
	I0811 23:23:53.328386   32156 start.go:83] releasing machines lock for "multinode-618164", held for 20.517369844s
	I0811 23:23:53.328407   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.328685   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:53.331421   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.331764   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.331792   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.331943   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.332514   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.332699   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.332775   32156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:23:53.332823   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.332956   32156 ssh_runner.go:195] Run: cat /version.json
	I0811 23:23:53.332982   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.335410   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335468   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335829   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.335869   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.335888   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335911   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335979   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.336078   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.336152   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.336208   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.336358   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.336359   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.336526   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:53.336539   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:53.419787   32156 command_runner.go:130] > {"iso_version": "v1.31.0-1690838458-16971", "kicbase_version": "v0.0.40-1690799191-16971", "minikube_version": "v1.31.1", "commit": "29dfb44a8786625102cff167b7adaa8f8ef2d500"}
	I0811 23:23:53.419957   32156 ssh_runner.go:195] Run: systemctl --version
	I0811 23:23:53.446524   32156 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:23:53.446577   32156 command_runner.go:130] > systemd 247 (247)
	I0811 23:23:53.446602   32156 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0811 23:23:53.446675   32156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:23:53.452149   32156 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0811 23:23:53.452181   32156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0811 23:23:53.452244   32156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:23:53.467021   32156 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0811 23:23:53.467055   32156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:23:53.467067   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:23:53.467195   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:23:53.482878   32156 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0811 23:23:53.483267   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0811 23:23:53.492232   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0811 23:23:53.501066   32156 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0811 23:23:53.501126   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0811 23:23:53.510089   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:23:53.519076   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0811 23:23:53.528146   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:23:53.537240   32156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:23:53.546612   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0811 23:23:53.555662   32156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:23:53.563978   32156 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:23:53.564054   32156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:23:53.572317   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:53.671843   32156 ssh_runner.go:195] Run: sudo systemctl restart containerd
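
The sed commands above rewrite /etc/containerd/config.toml so containerd matches the cgroupfs driver minikube selected (SystemdCgroup = false, runc v2, conf_dir) before the daemon is restarted. A sketch of the central rewrite as a line-anchored Go regexp, assuming a sed-style substitution is all that is needed; `forceCgroupfs` is a made-up name:

    package main

    import (
        "fmt"
        "regexp"
    )

    // systemdCgroupRe matches the SystemdCgroup line, preserving its indentation,
    // the same way the log's `sed -i -r 's|^( *)SystemdCgroup = .*$|...|'` does.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    // forceCgroupfs rewrites the config text so containerd uses cgroupfs.
    // Illustrative only; minikube performs this edit with sed over SSH.
    func forceCgroupfs(config string) string {
        return systemdCgroupRe.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "[plugins.cri]\n  SystemdCgroup = true\n"
        fmt.Print(forceCgroupfs(in))
    }
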
	I0811 23:23:53.687731   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:23:53.687811   32156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0811 23:23:53.702064   32156 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0811 23:23:53.702088   32156 command_runner.go:130] > [Unit]
	I0811 23:23:53.702099   32156 command_runner.go:130] > Description=Docker Application Container Engine
	I0811 23:23:53.702108   32156 command_runner.go:130] > Documentation=https://docs.docker.com
	I0811 23:23:53.702116   32156 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0811 23:23:53.702121   32156 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0811 23:23:53.702127   32156 command_runner.go:130] > StartLimitBurst=3
	I0811 23:23:53.702133   32156 command_runner.go:130] > StartLimitIntervalSec=60
	I0811 23:23:53.702139   32156 command_runner.go:130] > [Service]
	I0811 23:23:53.702145   32156 command_runner.go:130] > Type=notify
	I0811 23:23:53.702150   32156 command_runner.go:130] > Restart=on-failure
	I0811 23:23:53.702167   32156 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 23:23:53.702181   32156 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 23:23:53.702197   32156 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 23:23:53.702210   32156 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0811 23:23:53.702224   32156 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 23:23:53.702239   32156 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 23:23:53.702257   32156 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 23:23:53.702275   32156 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 23:23:53.702289   32156 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 23:23:53.702297   32156 command_runner.go:130] > ExecStart=
	I0811 23:23:53.702326   32156 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0811 23:23:53.702341   32156 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 23:23:53.702354   32156 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 23:23:53.702368   32156 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 23:23:53.702378   32156 command_runner.go:130] > LimitNOFILE=infinity
	I0811 23:23:53.702387   32156 command_runner.go:130] > LimitNPROC=infinity
	I0811 23:23:53.702397   32156 command_runner.go:130] > LimitCORE=infinity
	I0811 23:23:53.702409   32156 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0811 23:23:53.702417   32156 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0811 23:23:53.702426   32156 command_runner.go:130] > TasksMax=infinity
	I0811 23:23:53.702436   32156 command_runner.go:130] > TimeoutStartSec=0
	I0811 23:23:53.702451   32156 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 23:23:53.702462   32156 command_runner.go:130] > Delegate=yes
	I0811 23:23:53.702475   32156 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0811 23:23:53.702485   32156 command_runner.go:130] > KillMode=process
	I0811 23:23:53.702491   32156 command_runner.go:130] > [Install]
	I0811 23:23:53.702502   32156 command_runner.go:130] > WantedBy=multi-user.target
	I0811 23:23:53.702568   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:23:53.717114   32156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:23:53.733138   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:23:53.744968   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:23:53.756483   32156 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0811 23:23:53.783422   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:23:53.795905   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:23:53.812681   32156 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0811 23:23:53.813092   32156 ssh_runner.go:195] Run: which cri-dockerd
	I0811 23:23:53.816651   32156 command_runner.go:130] > /usr/bin/cri-dockerd
	I0811 23:23:53.816744   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0811 23:23:53.825824   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0811 23:23:53.841306   32156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0811 23:23:53.953526   32156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0811 23:23:54.065429   32156 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0811 23:23:54.065459   32156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0811 23:23:54.081837   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:54.182796   32156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0811 23:23:55.651015   32156 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.468179659s)
	I0811 23:23:55.651079   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:23:55.766938   32156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0811 23:23:55.867537   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:23:55.971821   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:56.072586   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0811 23:23:56.093196   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:56.223201   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0811 23:23:56.306062   32156 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0811 23:23:56.306131   32156 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0811 23:23:56.311586   32156 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0811 23:23:56.311612   32156 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0811 23:23:56.311621   32156 command_runner.go:130] > Device: 16h/22d	Inode: 861         Links: 1
	I0811 23:23:56.311631   32156 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0811 23:23:56.311640   32156 command_runner.go:130] > Access: 2023-08-11 23:23:56.189456579 +0000
	I0811 23:23:56.311648   32156 command_runner.go:130] > Modify: 2023-08-11 23:23:56.189456579 +0000
	I0811 23:23:56.311660   32156 command_runner.go:130] > Change: 2023-08-11 23:23:56.192456579 +0000
	I0811 23:23:56.311665   32156 command_runner.go:130] >  Birth: -
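
start.go:513 above announces a bounded wait for /var/run/cri-dockerd.sock, and the stat output confirms the socket appeared. A hedged Go sketch of that pattern (poll until the path exists or a deadline passes); the 250ms interval and the name `waitForSocket` are assumptions:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses, mirroring
    // the "Will wait 60s for socket path" step. A sketch, not minikube's code.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond) // assumed poll interval
        }
        return errors.New("timed out waiting for " + path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
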
	I0811 23:23:56.311689   32156 start.go:534] Will wait 60s for crictl version
	I0811 23:23:56.311738   32156 ssh_runner.go:195] Run: which crictl
	I0811 23:23:56.316016   32156 command_runner.go:130] > /usr/bin/crictl
	I0811 23:23:56.316082   32156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:23:56.352987   32156 command_runner.go:130] > Version:  0.1.0
	I0811 23:23:56.353012   32156 command_runner.go:130] > RuntimeName:  docker
	I0811 23:23:56.353017   32156 command_runner.go:130] > RuntimeVersion:  24.0.4
	I0811 23:23:56.353022   32156 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0811 23:23:56.354461   32156 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0811 23:23:56.354520   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:23:56.380173   32156 command_runner.go:130] > 24.0.4
	I0811 23:23:56.381466   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:23:56.408078   32156 command_runner.go:130] > 24.0.4
	I0811 23:23:56.411512   32156 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0811 23:23:56.411562   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:56.414352   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:56.414801   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:56.414834   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:56.415056   32156 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0811 23:23:56.419160   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:23:56.431250   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:23:56.431297   32156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 23:23:56.450313   32156 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.4
	I0811 23:23:56.450330   32156 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.4
	I0811 23:23:56.450342   32156 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.4
	I0811 23:23:56.450349   32156 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.4
	I0811 23:23:56.450353   32156 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0811 23:23:56.450357   32156 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0811 23:23:56.450362   32156 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0811 23:23:56.450372   32156 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0811 23:23:56.450377   32156 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:23:56.450381   32156 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0811 23:23:56.451350   32156 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0811 23:23:56.451375   32156 docker.go:566] Images already preloaded, skipping extraction
	I0811 23:23:56.451416   32156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 23:23:56.469960   32156 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.4
	I0811 23:23:56.469975   32156 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.4
	I0811 23:23:56.469981   32156 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.4
	I0811 23:23:56.469986   32156 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.4
	I0811 23:23:56.469996   32156 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0811 23:23:56.470001   32156 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0811 23:23:56.470006   32156 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0811 23:23:56.470010   32156 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0811 23:23:56.470014   32156 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:23:56.470022   32156 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0811 23:23:56.470942   32156 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0811 23:23:56.470975   32156 cache_images.go:84] Images are preloaded, skipping loading
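
`docker images --format {{.Repository}}:{{.Tag}}` is run twice above and compared against the expected image list before concluding "Images are preloaded, skipping loading". The underlying check is a set difference; a minimal sketch, with `missingImages` as an illustrative helper name:

    package main

    import "fmt"

    // missingImages returns required images not present in the runtime; an empty
    // result corresponds to "Images are preloaded, skipping loading".
    func missingImages(required, present []string) []string {
        have := make(map[string]bool, len(present))
        for _, img := range present {
            have[img] = true
        }
        var missing []string
        for _, img := range required {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        required := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.7-0"}
        present := []string{"registry.k8s.io/pause:3.9"}
        fmt.Println(missingImages(required, present)) // [registry.k8s.io/etcd:3.5.7-0]
    }
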
	I0811 23:23:56.471028   32156 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0811 23:23:56.497884   32156 command_runner.go:130] > cgroupfs
	I0811 23:23:56.498015   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:23:56.498032   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:23:56.498040   32156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:23:56.498061   32156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-618164 NodeName:multinode-618164 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:23:56.498205   32156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-618164"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 23:23:56.498267   32156 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-618164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
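
The generated kubeadm.yaml above stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in a single file, separated by `---`. A toy Go sketch of splitting such a file into its documents, assuming the simple case where each separator sits alone on its own line (real YAML handling is more involved):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitDocs splits a multi-document YAML string on standalone "---" lines
    // and drops empty documents. Illustrative helper, not minikube's parser.
    func splitDocs(config string) []string {
        var docs []string
        for _, d := range strings.Split(config, "\n---\n") {
            if strings.TrimSpace(d) != "" {
                docs = append(docs, d)
            }
        }
        return docs
    }

    func main() {
        cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
        fmt.Println(len(splitDocs(cfg))) // 2
    }
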
	I0811 23:23:56.498312   32156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:23:56.508130   32156 command_runner.go:130] > kubeadm
	I0811 23:23:56.508150   32156 command_runner.go:130] > kubectl
	I0811 23:23:56.508156   32156 command_runner.go:130] > kubelet
	I0811 23:23:56.508307   32156 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:23:56.508367   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 23:23:56.517170   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0811 23:23:56.532821   32156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 23:23:56.548306   32156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0811 23:23:56.566152   32156 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0811 23:23:56.570221   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:23:56.582186   32156 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164 for IP: 192.168.39.6
	I0811 23:23:56.582217   32156 certs.go:190] acquiring lock for shared ca certs: {Name:mke12ed30faa4458f68c7f1069767b7834c8a1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:23:56.582354   32156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key
	I0811 23:23:56.582418   32156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key
	I0811 23:23:56.582498   32156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key
	I0811 23:23:56.582583   32156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.key.cc3bd7a5
	I0811 23:23:56.582638   32156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.key
	I0811 23:23:56.582652   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0811 23:23:56.582678   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0811 23:23:56.582699   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0811 23:23:56.582718   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0811 23:23:56.582736   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:23:56.582754   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:23:56.582772   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:23:56.582789   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 23:23:56.582856   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem (1338 bytes)
	W0811 23:23:56.582894   32156 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836_empty.pem, impossibly tiny 0 bytes
	I0811 23:23:56.582909   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem (1679 bytes)
	I0811 23:23:56.582947   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem (1078 bytes)
	I0811 23:23:56.582983   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:23:56.583016   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem (1675 bytes)
	I0811 23:23:56.583070   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:23:56.583127   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.583147   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem -> /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.583166   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.583678   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 23:23:56.609836   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 23:23:56.633914   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 23:23:56.659924   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0811 23:23:56.684037   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:23:56.707211   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 23:23:56.732529   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:23:56.756471   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0811 23:23:56.781148   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:23:56.805144   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem --> /usr/share/ca-certificates/16836.pem (1338 bytes)
	I0811 23:23:56.829103   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /usr/share/ca-certificates/168362.pem (1708 bytes)
	I0811 23:23:56.852875   32156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 23:23:56.870933   32156 ssh_runner.go:195] Run: openssl version
	I0811 23:23:56.876297   32156 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0811 23:23:56.876562   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:23:56.888670   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.893257   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.893511   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.893558   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.898906   32156 command_runner.go:130] > b5213941
	I0811 23:23:56.899091   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:23:56.910898   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16836.pem && ln -fs /usr/share/ca-certificates/16836.pem /etc/ssl/certs/16836.pem"
	I0811 23:23:56.922490   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.927389   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.927416   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.927458   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.933404   32156 command_runner.go:130] > 51391683
	I0811 23:23:56.933456   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16836.pem /etc/ssl/certs/51391683.0"
	I0811 23:23:56.945430   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168362.pem && ln -fs /usr/share/ca-certificates/168362.pem /etc/ssl/certs/168362.pem"
	I0811 23:23:56.957473   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.962297   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.962400   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.962441   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.967962   32156 command_runner.go:130] > 3ec20f2e
	I0811 23:23:56.968147   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168362.pem /etc/ssl/certs/3ec20f2e.0"
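
Each block above follows the same three-step pattern: place the cert under /usr/share/ca-certificates, take its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 at it so TLS stacks can resolve the CA by hash. A sketch of the hash-and-link step in Go, shelling out to openssl just as the log does; `linkByHash` is a hypothetical name, and writing to /etc/ssl/certs requires root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of certPath and symlinks
    // <certsDir>/<hash>.0 at it, mirroring the `openssl x509 -hash` + `ln -fs`
    // pair in the log. Sketch only.
    func linkByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := certsDir + "/" + hash + ".0"
        _ = os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }
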
	I0811 23:23:56.980192   32156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:23:56.984658   32156 command_runner.go:130] > ca.crt
	I0811 23:23:56.984671   32156 command_runner.go:130] > ca.key
	I0811 23:23:56.984681   32156 command_runner.go:130] > healthcheck-client.crt
	I0811 23:23:56.984685   32156 command_runner.go:130] > healthcheck-client.key
	I0811 23:23:56.984689   32156 command_runner.go:130] > peer.crt
	I0811 23:23:56.984693   32156 command_runner.go:130] > peer.key
	I0811 23:23:56.984696   32156 command_runner.go:130] > server.crt
	I0811 23:23:56.984700   32156 command_runner.go:130] > server.key
	I0811 23:23:56.985037   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0811 23:23:56.990675   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:56.990998   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0811 23:23:56.996756   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:56.997039   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0811 23:23:57.002784   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:57.002849   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0811 23:23:57.008397   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:57.008693   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0811 23:23:57.014226   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:57.014501   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0811 23:23:57.020206   32156 command_runner.go:130] > Certificate will not expire
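
The six `openssl x509 ... -checkend 86400` probes above ask whether each control-plane cert expires within the next 24 hours. The same check can be done in Go with only the standard library; `certExpiresWithin` is an illustrative name:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certExpiresWithin mirrors `openssl x509 -checkend`: it reports whether the
    // PEM cert at path expires within duration d of now.
    func certExpiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := certExpiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", expiring)
    }
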
	I0811 23:23:57.020384   32156 kubeadm.go:404] StartCluster: {Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:23:57.020523   32156 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 23:23:57.043601   32156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 23:23:57.055163   32156 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0811 23:23:57.055181   32156 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0811 23:23:57.055187   32156 command_runner.go:130] > /var/lib/minikube/etcd:
	I0811 23:23:57.055190   32156 command_runner.go:130] > member
	I0811 23:23:57.055397   32156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0811 23:23:57.055414   32156 kubeadm.go:636] restartCluster start
	I0811 23:23:57.055461   32156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0811 23:23:57.066155   32156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:57.066667   32156 kubeconfig.go:135] verify returned: extract IP: "multinode-618164" does not appear in /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:23:57.066795   32156 kubeconfig.go:146] "multinode-618164" context is missing from /home/jenkins/minikube-integration/17044-9593/kubeconfig - will repair!
	I0811 23:23:57.067123   32156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-9593/kubeconfig: {Name:mk5d0cc13acd7d86edf0e41f0198b0f7dd85af9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:23:57.067520   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:23:57.067745   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:23:57.068588   32156 cert_rotation.go:137] Starting client certificate rotation controller
	I0811 23:23:57.068768   32156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0811 23:23:57.079070   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:57.079121   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:57.092235   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:57.092251   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:57.092291   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:57.104916   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:57.605643   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:57.605826   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:57.618255   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:58.104969   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:58.105071   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:58.117713   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:58.605244   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:58.605323   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:58.617825   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:59.105371   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:59.105448   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:59.118693   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:59.605213   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:59.605293   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:59.617820   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:00.105361   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:00.105458   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:00.118242   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:00.605942   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:00.606023   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:00.618784   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:01.105318   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:01.105397   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:01.118306   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:01.605916   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:01.605980   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:01.618363   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:02.106046   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:02.106125   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:02.118972   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:02.605633   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:02.605720   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:02.618271   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:03.105628   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:03.105699   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:03.118269   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:03.605900   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:03.605997   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:03.618729   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:04.105270   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:04.105338   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:04.118027   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:04.605749   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:04.605833   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:04.618161   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:05.105826   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:05.105910   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:05.118139   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:05.605768   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:05.605857   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:05.617839   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:06.105373   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:06.105458   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:06.117905   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:06.605475   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:06.605560   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:06.618008   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:07.079788   32156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
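
The repeated "Checking apiserver status ..." entries from 23:23:57 to 23:24:06 are a single poll loop: pgrep the apiserver roughly every 500ms until a context deadline fires, which produces the "context deadline exceeded" verdict above and triggers the cluster reconfigure. A minimal Go sketch of such a loop; the interval, timeout, and `pollUntil` name are assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // pollUntil runs check at each tick until it succeeds or the context
    // deadline passes, in which case it returns context.DeadlineExceeded.
    func pollUntil(ctx context.Context, interval time.Duration, check func() bool) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        err := pollUntil(ctx, 500*time.Millisecond, func() bool { return false }) // apiserver never comes up
        fmt.Println(err) // context deadline exceeded
    }
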
	I0811 23:24:07.079816   32156 kubeadm.go:1128] stopping kube-system containers ...
	I0811 23:24:07.079870   32156 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 23:24:07.101442   32156 command_runner.go:130] > e5175209bd61
	I0811 23:24:07.101457   32156 command_runner.go:130] > 5bb51d1cc942
	I0811 23:24:07.101461   32156 command_runner.go:130] > 92137e4b2bde
	I0811 23:24:07.101465   32156 command_runner.go:130] > 5b35741c12db
	I0811 23:24:07.101469   32156 command_runner.go:130] > feef63247dc8
	I0811 23:24:07.101473   32156 command_runner.go:130] > c0158a6605ea
	I0811 23:24:07.101476   32156 command_runner.go:130] > 53769ace7d8f
	I0811 23:24:07.101480   32156 command_runner.go:130] > c453bb965128
	I0811 23:24:07.101485   32156 command_runner.go:130] > ef74cd56c60d
	I0811 23:24:07.101491   32156 command_runner.go:130] > a3429cc90df2
	I0811 23:24:07.101496   32156 command_runner.go:130] > 2965fda37c07
	I0811 23:24:07.101502   32156 command_runner.go:130] > 5f9d39ea2d1f
	I0811 23:24:07.101509   32156 command_runner.go:130] > e102c9cb8b46
	I0811 23:24:07.101515   32156 command_runner.go:130] > 208f3b4c3f22
	I0811 23:24:07.101530   32156 command_runner.go:130] > 609eb0503045
	I0811 23:24:07.101536   32156 command_runner.go:130] > 5db82ba10c90
	I0811 23:24:07.102528   32156 docker.go:462] Stopping containers: [e5175209bd61 5bb51d1cc942 92137e4b2bde 5b35741c12db feef63247dc8 c0158a6605ea 53769ace7d8f c453bb965128 ef74cd56c60d a3429cc90df2 2965fda37c07 5f9d39ea2d1f e102c9cb8b46 208f3b4c3f22 609eb0503045 5db82ba10c90]
	I0811 23:24:07.102587   32156 ssh_runner.go:195] Run: docker stop e5175209bd61 5bb51d1cc942 92137e4b2bde 5b35741c12db feef63247dc8 c0158a6605ea 53769ace7d8f c453bb965128 ef74cd56c60d a3429cc90df2 2965fda37c07 5f9d39ea2d1f e102c9cb8b46 208f3b4c3f22 609eb0503045 5db82ba10c90
	I0811 23:24:07.120025   32156 command_runner.go:130] > e5175209bd61
	I0811 23:24:07.120046   32156 command_runner.go:130] > 5bb51d1cc942
	I0811 23:24:07.120726   32156 command_runner.go:130] > 92137e4b2bde
	I0811 23:24:07.120868   32156 command_runner.go:130] > 5b35741c12db
	I0811 23:24:07.120883   32156 command_runner.go:130] > feef63247dc8
	I0811 23:24:07.121133   32156 command_runner.go:130] > c0158a6605ea
	I0811 23:24:07.121319   32156 command_runner.go:130] > 53769ace7d8f
	I0811 23:24:07.121596   32156 command_runner.go:130] > c453bb965128
	I0811 23:24:07.121776   32156 command_runner.go:130] > ef74cd56c60d
	I0811 23:24:07.123023   32156 command_runner.go:130] > a3429cc90df2
	I0811 23:24:07.123166   32156 command_runner.go:130] > 2965fda37c07
	I0811 23:24:07.123179   32156 command_runner.go:130] > 5f9d39ea2d1f
	I0811 23:24:07.123186   32156 command_runner.go:130] > e102c9cb8b46
	I0811 23:24:07.123192   32156 command_runner.go:130] > 208f3b4c3f22
	I0811 23:24:07.123198   32156 command_runner.go:130] > 609eb0503045
	I0811 23:24:07.123205   32156 command_runner.go:130] > 5db82ba10c90
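
The two docker invocations above (a ps with a kube-system name filter, then a single stop over all sixteen IDs) can be sketched as follows; `stopKubeSystemContainers` is a made-up wrapper, and the filter string is copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers lists kube-system containers by name filter and
    // stops them in one docker invocation, mirroring the log's two commands.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing to stop
        }
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            fmt.Println(err)
        }
    }
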
	I0811 23:24:07.124644   32156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0811 23:24:07.141077   32156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 23:24:07.150449   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0811 23:24:07.150465   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0811 23:24:07.150472   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0811 23:24:07.150478   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:24:07.150553   32156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
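
Exit status 2 from the ls probe means none of the four kubeconfig files survived the VM restart, so the "stale config cleanup" is skipped and the files are simply regenerated below. A stdlib-only sketch of the same presence check (hypothetical helper, for illustration):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// kubeConfigsPresent reports whether all four control-plane kubeconfig
	// files probed in the log exist; any missing file means the stale
	// config cleanup step can be skipped, as above.
	func kubeConfigsPresent(dir string) bool {
		for _, name := range []string{
			"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf",
		} {
			if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println(kubeConfigsPresent("/etc/kubernetes"))
	}
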
	I0811 23:24:07.150600   32156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 23:24:07.160111   32156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0811 23:24:07.160148   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:07.276942   32156 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 23:24:07.277335   32156 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0811 23:24:07.277811   32156 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0811 23:24:07.278282   32156 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0811 23:24:07.279541   32156 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0811 23:24:07.280002   32156 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0811 23:24:07.280839   32156 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0811 23:24:07.281293   32156 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0811 23:24:07.281771   32156 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0811 23:24:07.282196   32156 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0811 23:24:07.282627   32156 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0811 23:24:07.284468   32156 command_runner.go:130] > [certs] Using the existing "sa" key
	I0811 23:24:07.284530   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.060026   32156 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 23:24:08.060052   32156 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 23:24:08.060065   32156 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 23:24:08.060074   32156 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 23:24:08.060084   32156 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 23:24:08.060113   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.130867   32156 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:24:08.133320   32156 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:24:08.133411   32156 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0811 23:24:08.254043   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.356243   32156 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 23:24:08.356264   32156 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 23:24:08.356270   32156 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 23:24:08.356277   32156 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 23:24:08.356408   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.432444   32156 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
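
At this point all five kubeadm init phases from the log have run in order: certs all (reusing every existing certificate), kubeconfig all (rewriting the four missing files), kubelet-start, control-plane all, and etcd local. A sketch of driving that same sequence from Go (minikube actually runs these over SSH inside the VM with an adjusted PATH; the wrapper below is an assumption for illustration):

	package main

	import (
		"log"
		"os/exec"
	)

	// runInitPhases replays the kubeadm phases from the log, in order,
	// against a single config file. kubeadm is assumed to be on PATH.
	func runInitPhases(config string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", config)
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				log.Printf("%v failed: %s", p, out)
				return err
			}
		}
		return nil
	}

	func main() {
		if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
			log.Fatal(err)
		}
	}
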
	I0811 23:24:08.446083   32156 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:24:08.446163   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:08.457920   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:08.973444   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:09.473768   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:09.973608   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:10.473625   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:10.523079   32156 command_runner.go:130] > 1697
	I0811 23:24:10.523142   32156 api_server.go:72] duration metric: took 2.077063631s to wait for apiserver process to appear ...
	I0811 23:24:10.523153   32156 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:24:10.523169   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:10.523707   32156 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I0811 23:24:10.523743   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:10.524067   32156 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I0811 23:24:11.024917   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:15.146509   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 23:24:15.146543   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 23:24:15.146557   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:15.162963   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 23:24:15.162989   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 23:24:15.524452   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:15.529914   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:24:15.529939   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:24:16.024527   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:16.030080   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:24:16.030104   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:24:16.524699   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:16.529920   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
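
The probe sequence above is the normal restart progression: connection refused while the apiserver static pod starts, 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal Go polling sketch in the same spirit (InsecureSkipVerify stands in for the CA handling minikube does; an assumption, not its exact code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it returns
	// 200 or the deadline passes, tolerating the connection-refused, 403
	// and 500 responses seen in the log while startup completes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a cert we have not verified yet.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz never became ready")
	}

	func main() {
		if err := waitHealthz("https://192.168.39.6:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
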
	I0811 23:24:16.529982   32156 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0811 23:24:16.529987   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:16.529995   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:16.530004   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:16.543593   32156 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0811 23:24:16.543621   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:16.543632   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:16 GMT
	I0811 23:24:16.543641   32156 round_trippers.go:580]     Audit-Id: 598ce2af-61b4-4aee-b059-0721d25a0c30
	I0811 23:24:16.543649   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:16.543658   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:16.543665   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:16.543673   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:16.543696   32156 round_trippers.go:580]     Content-Length: 263
	I0811 23:24:16.543957   32156 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0811 23:24:16.544064   32156 api_server.go:141] control plane version: v1.27.4
	I0811 23:24:16.544088   32156 api_server.go:131] duration metric: took 6.020928424s to wait for apiserver health ...
	I0811 23:24:16.544099   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:24:16.544116   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:24:16.546067   32156 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 23:24:16.547723   32156 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:24:16.556631   32156 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0811 23:24:16.556655   32156 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0811 23:24:16.556665   32156 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0811 23:24:16.556686   32156 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:24:16.556698   32156 command_runner.go:130] > Access: 2023-08-11 23:23:45.638456579 +0000
	I0811 23:24:16.556707   32156 command_runner.go:130] > Modify: 2023-08-01 03:01:17.000000000 +0000
	I0811 23:24:16.556715   32156 command_runner.go:130] > Change: 2023-08-11 23:23:43.758456579 +0000
	I0811 23:24:16.556724   32156 command_runner.go:130] >  Birth: -
	I0811 23:24:16.556941   32156 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:24:16.556958   32156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:24:16.582212   32156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:24:18.035856   32156 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:24:18.035881   32156 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:24:18.035892   32156 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0811 23:24:18.035896   32156 command_runner.go:130] > daemonset.apps/kindnet configured
	I0811 23:24:18.035913   32156 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.453677621s)
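
With three nodes found, kindnet is the recommended CNI, and its manifest is applied with the cluster's own versioned kubectl binary and kubeconfig; the unchanged/configured lines confirm most objects already existed from the first start. A sketch of the same apply call (paths taken from the log; the wrapper itself is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyCNI applies a CNI manifest the same way the log does: the
	// versioned kubectl binary plus an explicit kubeconfig.
	func applyCNI(kubectl, kubeconfig, manifest string) (string, error) {
		out, err := exec.Command(kubectl, "apply",
			"--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := applyCNI(
			"/var/lib/minikube/binaries/v1.27.4/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/var/tmp/minikube/cni.yaml",
		)
		fmt.Print(out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
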
	I0811 23:24:18.035931   32156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:24:18.036017   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:18.036074   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.036089   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.036095   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.040676   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:18.040699   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.040710   32156 round_trippers.go:580]     Audit-Id: 4df18645-655d-4a79-8469-4caba2b1ee9d
	I0811 23:24:18.040731   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.040745   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.040751   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.040759   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.040765   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:17 GMT
	I0811 23:24:18.041909   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"832"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84507 chars]
	I0811 23:24:18.045998   32156 system_pods.go:59] 12 kube-system pods found
	I0811 23:24:18.046031   32156 system_pods.go:61] "coredns-5d78c9869d-zrmf9" [c3c83ae1-ae12-4872-9c78-4aff9f1cefe4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0811 23:24:18.046040   32156 system_pods.go:61] "etcd-multinode-618164" [543135b3-5e52-43aa-af7c-1fea5cfb95b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0811 23:24:18.046048   32156 system_pods.go:61] "kindnet-clfqj" [b3e12c4b-402f-467b-a1f2-f7db2ae3d0ef] Running
	I0811 23:24:18.046052   32156 system_pods.go:61] "kindnet-m2c5t" [5264f13e-c667-4d82-912f-49c23eaf31cd] Running
	I0811 23:24:18.046059   32156 system_pods.go:61] "kindnet-szdxp" [d827d201-1ae4-4db8-858f-0fda601d5c40] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0811 23:24:18.046071   32156 system_pods.go:61] "kube-apiserver-multinode-618164" [a1145d9b-2c2a-42b1-bbe6-142472dc9d01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0811 23:24:18.046078   32156 system_pods.go:61] "kube-controller-manager-multinode-618164" [41f34044-7115-493f-94d8-53f69fd37242] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0811 23:24:18.046086   32156 system_pods.go:61] "kube-proxy-9ldtq" [ff783df6-3af7-44cf-bc60-843db8420efa] Running
	I0811 23:24:18.046092   32156 system_pods.go:61] "kube-proxy-glw45" [4616f16f-9566-447c-90cd-8e37c18508e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0811 23:24:18.046103   32156 system_pods.go:61] "kube-proxy-pv5p5" [08e6223f-0c5c-47bd-b37d-67f279f4d4be] Running
	I0811 23:24:18.046109   32156 system_pods.go:61] "kube-scheduler-multinode-618164" [b2a96d9a-e022-4abd-b8c6-e6ec3102773f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0811 23:24:18.046117   32156 system_pods.go:61] "storage-provisioner" [84ba55f6-4725-46ae-810f-130cbb82dd7f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0811 23:24:18.046123   32156 system_pods.go:74] duration metric: took 10.186574ms to wait for pod list to return data ...
	I0811 23:24:18.046132   32156 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:24:18.046176   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0811 23:24:18.046183   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.046190   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.046196   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.048881   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.048898   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.048908   32156 round_trippers.go:580]     Audit-Id: 1115fb47-264c-47dd-9ccc-f4657b13068b
	I0811 23:24:18.048917   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.048933   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.048943   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.048951   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.048956   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:17 GMT
	I0811 23:24:18.049382   32156 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"832"},"items":[{"metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13669 chars]
	I0811 23:24:18.050131   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:18.050152   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:18.050160   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:18.050164   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:18.050167   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:18.050170   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:18.050174   32156 node_conditions.go:105] duration metric: took 4.037902ms to run NodePressure ...
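
The NodePressure check reads the NodeList once and records each node's CPU and ephemeral-storage capacity, which is what the three repeated capacity pairs above are. A stdlib-only sketch that pulls the same two fields out of a NodeList body (shape as served by the API; error handling trimmed):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// nodeList models just the fields the NodePressure check reads:
	// per-node capacity for CPU and ephemeral storage.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		// Stand-in for the GET /api/v1/nodes body in the log.
		body := []byte(`{"items":[{"metadata":{"name":"multinode-618164"},
		  "status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}]}`)
		var nl nodeList
		if err := json.Unmarshal(body, &nl); err != nil {
			panic(err)
		}
		for _, n := range nl.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Metadata.Name, n.Status.Capacity["cpu"],
				n.Status.Capacity["ephemeral-storage"])
		}
	}
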
	I0811 23:24:18.050187   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:18.257419   32156 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0811 23:24:18.257449   32156 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0811 23:24:18.257534   32156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0811 23:24:18.257680   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0811 23:24:18.257693   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.257704   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.257714   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.260900   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.260916   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.260922   32156 round_trippers.go:580]     Audit-Id: 288a3947-3654-49b4-8986-603058e388e2
	I0811 23:24:18.260927   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.260938   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.260951   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.260960   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.260974   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.261409   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"765","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 29734 chars]
	I0811 23:24:18.262407   32156 kubeadm.go:787] kubelet initialised
	I0811 23:24:18.262423   32156 kubeadm.go:788] duration metric: took 4.87217ms waiting for restarted kubelet to initialise ...
	I0811 23:24:18.262429   32156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:24:18.262470   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:18.262478   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.262485   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.262491   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.268206   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:18.268224   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.268230   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.268244   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.268250   32156 round_trippers.go:580]     Audit-Id: b6772242-1278-44fe-99c3-99f4cecfcb50
	I0811 23:24:18.268256   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.268264   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.268269   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.270875   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84507 chars]
	I0811 23:24:18.273379   32156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.273462   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:18.273475   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.273486   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.273496   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.276941   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.276963   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.276974   32156 round_trippers.go:580]     Audit-Id: a5ac1403-abc1-4d4a-a0d4-e104245882e2
	I0811 23:24:18.276983   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.276992   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.277008   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.277016   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.277028   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.277829   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:18.278330   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.278344   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.278351   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.278357   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.280756   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.280772   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.280781   32156 round_trippers.go:580]     Audit-Id: d3b3c3fb-85f4-4a71-b869-560c44353ecf
	I0811 23:24:18.280791   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.280800   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.280814   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.280819   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.280824   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.280964   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.281343   32156 pod_ready.go:97] node "multinode-618164" hosting pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.281363   32156 pod_ready.go:81] duration metric: took 7.962993ms waiting for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.281370   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
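
This skip pattern repeats for every control-plane pod below: the pod reports Running, but the wait is aborted early because the hosting node's Ready condition is still "False", and pod readiness cannot settle before the node is Ready. A sketch of that node-condition gate against the /api/v1/nodes/<name> response shape (stdlib only, for illustration):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// nodeReady reports whether a Node object's Ready condition is "True",
	// which is the gate applied before each pod wait in the log.
	func nodeReady(node []byte) (bool, error) {
		var n struct {
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		}
		if err := json.Unmarshal(node, &n); err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil
	}

	func main() {
		body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
		ready, _ := nodeReady(body)
		fmt.Println("node ready:", ready) // false -> pod waits are skipped
	}
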
	I0811 23:24:18.281376   32156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.281421   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-618164
	I0811 23:24:18.281428   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.281434   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.281440   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.283955   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.283969   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.283975   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.283983   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.283992   32156 round_trippers.go:580]     Audit-Id: 2bc5beb2-e91a-470f-a60e-574b311bcaf5
	I0811 23:24:18.284002   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.284010   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.284026   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.284630   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"765","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6285 chars]
	I0811 23:24:18.284979   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.284990   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.284997   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.285005   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.286960   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:18.286979   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.286988   32156 round_trippers.go:580]     Audit-Id: 0b900ef5-d36c-4f31-89e0-0348ff68b814
	I0811 23:24:18.286997   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.287010   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.287019   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.287031   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.287044   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.287186   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.287464   32156 pod_ready.go:97] node "multinode-618164" hosting pod "etcd-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.287480   32156 pod_ready.go:81] duration metric: took 6.092582ms waiting for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.287488   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "etcd-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.287509   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.287587   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-618164
	I0811 23:24:18.287597   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.287607   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.287619   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.290857   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.290876   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.290885   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.290894   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.290908   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.290917   32156 round_trippers.go:580]     Audit-Id: a1066911-3b42-4ace-aea0-51ce2cd88bac
	I0811 23:24:18.290925   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.290931   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.291085   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-618164","namespace":"kube-system","uid":"a1145d9b-2c2a-42b1-bbe6-142472dc9d01","resourceVersion":"769","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.6:8443","kubernetes.io/config.hash":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.mirror":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.seen":"2023-08-11T23:20:15.427440318Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I0811 23:24:18.291573   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.291592   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.291603   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.291616   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.293435   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:18.293447   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.293453   32156 round_trippers.go:580]     Audit-Id: 79ca5249-4ff5-4112-900e-72efee7e30fb
	I0811 23:24:18.293458   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.293463   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.293468   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.293480   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.293498   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.293712   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.294048   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-apiserver-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.294068   32156 pod_ready.go:81] duration metric: took 6.520131ms waiting for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.294075   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-apiserver-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.294083   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.294134   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-618164
	I0811 23:24:18.294141   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.294148   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.294154   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.295834   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:18.295846   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.295852   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.295858   32156 round_trippers.go:580]     Audit-Id: 7c5089e7-4175-428e-ac85-0acd8a061636
	I0811 23:24:18.295863   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.295877   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.295885   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.295907   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.296220   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-618164","namespace":"kube-system","uid":"41f34044-7115-493f-94d8-53f69fd37242","resourceVersion":"770","creationTimestamp":"2023-08-11T23:20:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.mirror":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.seen":"2023-08-11T23:20:06.002920339Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I0811 23:24:18.436947   32156 request.go:628] Waited for 140.30031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.437004   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.437009   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.437021   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.437030   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.440103   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.440125   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.440135   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.440144   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.440153   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.440163   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.440172   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.440183   32156 round_trippers.go:580]     Audit-Id: 2651f9fb-6e9f-4069-9400-cf213560fc66
	I0811 23:24:18.440601   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.440908   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-controller-manager-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.440931   32156 pod_ready.go:81] duration metric: took 146.836208ms waiting for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.440941   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-controller-manager-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
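
The "Waited for ... due to client-side throttling" messages here and below come from the Kubernetes client's own rate limiter, not from server-side priority and fairness: the REST client defaults to a small QPS with a modest burst, so this tight loop of GETs has to sleep between requests. The pauses can be reproduced with a plain token bucket (golang.org/x/time/rate is used here for illustration; client-go ships its own flowcontrol wrapper):

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// Token bucket: 5 requests/second with a burst of 10, in the
		// spirit of a REST client's default client-side limits.
		limiter := rate.NewLimiter(rate.Limit(5), 10)
		start := time.Now()
		for i := 0; i < 15; i++ {
			// Wait blocks once the burst is spent; the pauses are what
			// the log reports as "client-side throttling".
			if err := limiter.Wait(context.Background()); err != nil {
				panic(err)
			}
			fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(time.Millisecond))
		}
	}
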
	I0811 23:24:18.440957   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.636431   32156 request.go:628] Waited for 195.407374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:24:18.636505   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:24:18.636510   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.636517   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.636524   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.640067   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.640085   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.640092   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.640098   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.640106   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.640115   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.640125   32156 round_trippers.go:580]     Audit-Id: 032b360e-0f94-45d6-af15-c6160aa8c3a5
	I0811 23:24:18.640134   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.640639   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9ldtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff783df6-3af7-44cf-bc60-843db8420efa","resourceVersion":"534","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
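The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local token-bucket rate limiter, which delays requests before they ever reach the API server's priority-and-fairness filter. A minimal sketch of where that limiter lives, assuming a hypothetical kubeconfig path and tuning values (client-go's defaults are QPS=5, Burst=10):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a rest.Config from a kubeconfig; the path is illustrative.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS=5 and Burst=10; requests beyond the
		// token bucket block client-side, producing the "Waited for ..."
		// log lines seen above. These values are hypothetical tuning.
		cfg.QPS = 50
		cfg.Burst = 100
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("built client: %T\n", client)
	}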
	I0811 23:24:18.836514   32156 request.go:628] Waited for 195.424526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:24:18.836577   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:24:18.836584   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.836595   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.836610   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.839277   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.839296   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.839303   32156 round_trippers.go:580]     Audit-Id: a85e7486-69f1-4ee8-a5bb-7113c8d7c0ad
	I0811 23:24:18.839311   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.839322   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.839333   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.839343   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.839352   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.839527   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"5117de97-d432-4fe0-baad-4ef71b0a5470","resourceVersion":"599","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3267 chars]
	I0811 23:24:18.839884   32156 pod_ready.go:92] pod "kube-proxy-9ldtq" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:18.839904   32156 pod_ready.go:81] duration metric: took 398.937925ms waiting for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.839918   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.036269   32156 request.go:628] Waited for 196.273614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:19.036317   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:19.036327   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.036338   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.036350   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.039088   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:19.039124   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.039135   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:19.039146   32156 round_trippers.go:580]     Audit-Id: 79dfe5b2-f19e-4a9b-9100-e3671b291ec3
	I0811 23:24:19.039162   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.039171   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.039183   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.039196   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.039380   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-glw45","generateName":"kube-proxy-","namespace":"kube-system","uid":"4616f16f-9566-447c-90cd-8e37c18508e3","resourceVersion":"768","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5924 chars]
	I0811 23:24:19.236148   32156 request.go:628] Waited for 196.339332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:19.236220   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:19.236238   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.236250   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.236256   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.240044   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:19.240061   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.240067   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.240073   32156 round_trippers.go:580]     Audit-Id: 789f221c-4319-45a0-935c-e22bc9b67be5
	I0811 23:24:19.240085   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.240102   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.240110   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.240122   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.240427   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:19.240725   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-proxy-glw45" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:19.240742   32156 pod_ready.go:81] duration metric: took 400.81245ms waiting for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:19.240749   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-proxy-glw45" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:19.240760   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.436165   32156 request.go:628] Waited for 195.331627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:19.436247   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:19.436257   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.436269   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.436279   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.439042   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:19.439061   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.439068   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.439074   32156 round_trippers.go:580]     Audit-Id: 2eb0809d-c710-4504-8605-d3ee1964d272
	I0811 23:24:19.439082   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.439120   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.439131   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.439138   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.439453   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pv5p5","generateName":"kube-proxy-","namespace":"kube-system","uid":"08e6223f-0c5c-47bd-b37d-67f279f4d4be","resourceVersion":"737","creationTimestamp":"2023-08-11T23:22:07Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0811 23:24:19.636165   32156 request.go:628] Waited for 196.302622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:19.636222   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:19.636229   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.636241   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.636251   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.639380   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:19.639403   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.639413   32156 round_trippers.go:580]     Audit-Id: 7c3eeb1c-5858-473c-a93b-eabca2a09765
	I0811 23:24:19.639420   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.639429   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.639442   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.639451   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.639461   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.639777   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m03","uid":"84060722-cb59-478c-9b01-7517a6ae9f59","resourceVersion":"756","creationTimestamp":"2023-08-11T23:22:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0811 23:24:19.640005   32156 pod_ready.go:92] pod "kube-proxy-pv5p5" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:19.640024   32156 pod_ready.go:81] duration metric: took 399.251193ms waiting for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.640032   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.836296   32156 request.go:628] Waited for 196.176722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:19.836345   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:19.836350   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.836357   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.836363   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.839606   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:19.839628   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.839638   32156 round_trippers.go:580]     Audit-Id: 31b363d3-c633-4e2f-92bd-7a466addde38
	I0811 23:24:19.839647   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.839655   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.839664   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.839670   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.839675   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.839905   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-618164","namespace":"kube-system","uid":"b2a96d9a-e022-4abd-b8c6-e6ec3102773f","resourceVersion":"764","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.mirror":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.seen":"2023-08-11T23:20:15.427437689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5144 chars]
	I0811 23:24:20.036703   32156 request.go:628] Waited for 196.353181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.036768   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.036773   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.036781   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.036788   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.039687   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:20.039710   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.039720   32156 round_trippers.go:580]     Audit-Id: 707357c6-9627-4117-b85d-0cae27545e67
	I0811 23:24:20.039727   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.039735   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.039746   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.039755   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.039777   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:20.039957   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:20.040361   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-scheduler-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:20.040379   32156 pod_ready.go:81] duration metric: took 400.341096ms waiting for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:20.040390   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-scheduler-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:20.040398   32156 pod_ready.go:38] duration metric: took 1.77796235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
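For reference, the pod_ready.go waits above treat a pod as "Ready" only when its PodReady condition reports True, and they skip pods whose host node is itself not Ready (the "(skipping!)" errors). A minimal sketch of that readiness predicate, assuming a hypothetical helper name and kubeconfig path; this is not minikube's exact code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		c, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := podIsReady(context.Background(), c, "kube-system", "kube-proxy-9ldtq")
		fmt.Println(ready, err)
	}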
	I0811 23:24:20.040416   32156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:24:20.051216   32156 command_runner.go:130] > -16
	I0811 23:24:20.051419   32156 ops.go:34] apiserver oom_adj: -16
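The ops.go line above reads the apiserver's legacy /proc/<pid>/oom_adj value; the negative -16 means the kernel's OOM killer is strongly biased away from killing kube-apiserver. A rough Go equivalent of the shell pipeline shown above, assuming pgrep matches exactly one process:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
		// Assumes pgrep prints a single PID; multiple matches would
		// need splitting on newlines first.
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(adj))) // e.g. -16
	}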
	I0811 23:24:20.051435   32156 kubeadm.go:640] restartCluster took 22.996014062s
	I0811 23:24:20.051445   32156 kubeadm.go:406] StartCluster complete in 23.031064441s
	I0811 23:24:20.051465   32156 settings.go:142] acquiring lock: {Name:mkdad93b07c8b1c16ba23107571d2c5baafb252d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:20.051564   32156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:24:20.052285   32156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-9593/kubeconfig: {Name:mk5d0cc13acd7d86edf0e41f0198b0f7dd85af9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:20.052541   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:24:20.052672   32156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:24:20.055298   32156 out.go:177] * Enabled addons: 
	I0811 23:24:20.052854   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:24:20.052880   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:24:20.056871   32156 addons.go:502] enable addons completed in 4.189089ms: enabled=[]
	I0811 23:24:20.057059   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
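The kapi.go dump above is the rest.Config minikube uses for this profile: the API server at https://192.168.39.6:8443 plus the profile's client certificate, key, and the cluster CA. A hand-written sketch of an equivalent config, reusing the same paths from the log (the surrounding main wiring is illustrative):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		profile := "/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164"
		cfg := &rest.Config{
			Host: "https://192.168.39.6:8443",
			// Same cert/key/CA files the kapi.go dump above points at.
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: profile + "/client.crt",
				KeyFile:  profile + "/client.key",
				CAFile:   "/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt",
			},
		}
		c, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client for %s: %T\n", cfg.Host, c)
	}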
	I0811 23:24:20.057318   32156 round_trippers.go:463] GET https://192.168.39.6:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:24:20.057329   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.057336   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.057342   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.060017   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:20.060032   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.060039   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.060044   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.060049   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.060056   32156 round_trippers.go:580]     Content-Length: 291
	I0811 23:24:20.060061   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.060067   32156 round_trippers.go:580]     Audit-Id: a83b0b99-a1a4-4098-871a-02d028f721ef
	I0811 23:24:20.060075   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.060091   32156 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31aef6c0-c84e-4384-9e6e-68f0c22e59ba","resourceVersion":"833","creationTimestamp":"2023-08-11T23:20:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 23:24:20.060220   32156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-618164" context rescaled to 1 replicas
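The GET of .../deployments/coredns/scale followed by "rescaled to 1 replicas" corresponds to the Deployment's scale subresource. A sketch of the same rescale via client-go's GetScale/UpdateScale, assuming a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		c, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Read the scale subresource, as in the GET above.
		scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Pin the deployment to one replica, as kapi.go reports doing.
		scale.Spec.Replicas = 1
		if _, err := c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}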
	I0811 23:24:20.060243   32156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0811 23:24:20.061985   32156 out.go:177] * Verifying Kubernetes components...
	I0811 23:24:20.063494   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:24:20.145035   32156 command_runner.go:130] > apiVersion: v1
	I0811 23:24:20.145057   32156 command_runner.go:130] > data:
	I0811 23:24:20.145061   32156 command_runner.go:130] >   Corefile: |
	I0811 23:24:20.145065   32156 command_runner.go:130] >     .:53 {
	I0811 23:24:20.145069   32156 command_runner.go:130] >         log
	I0811 23:24:20.145074   32156 command_runner.go:130] >         errors
	I0811 23:24:20.145077   32156 command_runner.go:130] >         health {
	I0811 23:24:20.145082   32156 command_runner.go:130] >            lameduck 5s
	I0811 23:24:20.145085   32156 command_runner.go:130] >         }
	I0811 23:24:20.145089   32156 command_runner.go:130] >         ready
	I0811 23:24:20.145094   32156 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0811 23:24:20.145098   32156 command_runner.go:130] >            pods insecure
	I0811 23:24:20.145104   32156 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0811 23:24:20.145108   32156 command_runner.go:130] >            ttl 30
	I0811 23:24:20.145111   32156 command_runner.go:130] >         }
	I0811 23:24:20.145119   32156 command_runner.go:130] >         prometheus :9153
	I0811 23:24:20.145122   32156 command_runner.go:130] >         hosts {
	I0811 23:24:20.145127   32156 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0811 23:24:20.145131   32156 command_runner.go:130] >            fallthrough
	I0811 23:24:20.145136   32156 command_runner.go:130] >         }
	I0811 23:24:20.145144   32156 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0811 23:24:20.145151   32156 command_runner.go:130] >            max_concurrent 1000
	I0811 23:24:20.145156   32156 command_runner.go:130] >         }
	I0811 23:24:20.145163   32156 command_runner.go:130] >         cache 30
	I0811 23:24:20.145170   32156 command_runner.go:130] >         loop
	I0811 23:24:20.145181   32156 command_runner.go:130] >         reload
	I0811 23:24:20.145186   32156 command_runner.go:130] >         loadbalance
	I0811 23:24:20.145190   32156 command_runner.go:130] >     }
	I0811 23:24:20.145199   32156 command_runner.go:130] > kind: ConfigMap
	I0811 23:24:20.145203   32156 command_runner.go:130] > metadata:
	I0811 23:24:20.145208   32156 command_runner.go:130] >   creationTimestamp: "2023-08-11T23:20:15Z"
	I0811 23:24:20.145213   32156 command_runner.go:130] >   name: coredns
	I0811 23:24:20.145217   32156 command_runner.go:130] >   namespace: kube-system
	I0811 23:24:20.145223   32156 command_runner.go:130] >   resourceVersion: "413"
	I0811 23:24:20.145228   32156 command_runner.go:130] >   uid: e0a1f713-20c0-4280-a782-fa6099258ac8
	I0811 23:24:20.147599   32156 node_ready.go:35] waiting up to 6m0s for node "multinode-618164" to be "Ready" ...
	I0811 23:24:20.147816   32156 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
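The "already contains ... skipping" decision follows from the Corefile dumped above: minikube injects a hosts block mapping 192.168.39.1 to host.minikube.internal, and on restart it patches the coredns ConfigMap only if that record is missing. A sketch of that check, with a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		c, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		cm, err := c.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The Corefile lives under the "Corefile" key, as the dump shows.
		if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
			fmt.Println("host record present, skipping patch")
			return
		}
		fmt.Println("host record missing, would patch Corefile")
	}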
	I0811 23:24:20.236926   32156 request.go:628] Waited for 89.227881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.236974   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.236979   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.236986   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.236993   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.239598   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:20.239623   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.239633   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.239642   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.239651   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.239659   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.239668   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.239681   32156 round_trippers.go:580]     Audit-Id: bd732ff7-ef51-4182-bdb8-dc8d4ee266e2
	I0811 23:24:20.239836   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:20.436686   32156 request.go:628] Waited for 196.404221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.436745   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.436751   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.436759   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.436767   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.439787   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:20.439829   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.439843   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.439855   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.439864   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.439875   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.439888   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.439900   32156 round_trippers.go:580]     Audit-Id: bfae5fd6-0f94-45e5-b774-b048e32b1889
	I0811 23:24:20.440000   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:20.941178   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.941201   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.941208   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.941224   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.944698   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:20.944727   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.944738   32156 round_trippers.go:580]     Audit-Id: 4d39ad7e-f72c-4a38-9451-ece4ac72751e
	I0811 23:24:20.944747   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.944763   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.944772   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.944784   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.944793   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.944928   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:21.441558   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:21.441583   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:21.441595   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:21.441607   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:21.444757   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:21.444783   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:21.444793   32156 round_trippers.go:580]     Audit-Id: 44b98dc6-91a0-487b-86fb-0835dca1c6b4
	I0811 23:24:21.444802   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:21.444826   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:21.444835   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:21.444847   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:21.444859   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:21 GMT
	I0811 23:24:21.444980   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:21.940575   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:21.940609   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:21.940617   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:21.940623   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:21.943896   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:21.943933   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:21.943943   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:21.943949   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:21.943955   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:21.943960   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:21 GMT
	I0811 23:24:21.943966   32156 round_trippers.go:580]     Audit-Id: fc5b0e1f-9211-4a46-8524-8219d022c1af
	I0811 23:24:21.943971   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:21.944109   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:22.440655   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:22.440679   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:22.440688   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:22.440698   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:22.443876   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:22.443898   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:22.443905   32156 round_trippers.go:580]     Audit-Id: e532448d-981d-4f23-805b-c68ac2a9a08f
	I0811 23:24:22.443911   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:22.443917   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:22.443922   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:22.443928   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:22.443933   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:22 GMT
	I0811 23:24:22.444267   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:22.444795   32156 node_ready.go:58] node "multinode-618164" has status "Ready":"False"
	I0811 23:24:22.941424   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:22.941444   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:22.941453   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:22.941459   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:22.944289   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:22.944313   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:22.944323   32156 round_trippers.go:580]     Audit-Id: 08cc924f-7ce0-4b24-85df-f5a51a3e2025
	I0811 23:24:22.944332   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:22.944341   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:22.944357   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:22.944375   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:22.944383   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:22 GMT
	I0811 23:24:22.944797   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:23.441459   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:23.441477   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:23.441486   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:23.441493   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:23.444128   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:23.444150   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:23.444160   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:23 GMT
	I0811 23:24:23.444168   32156 round_trippers.go:580]     Audit-Id: 844f5b54-e7ae-4bea-83cd-d78e30dd0397
	I0811 23:24:23.444176   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:23.444188   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:23.444205   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:23.444220   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:23.444765   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:23.941480   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:23.941503   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:23.941511   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:23.941517   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:23.944442   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:23.944459   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:23.944466   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:23.944471   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:23.944477   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:23.944485   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:23 GMT
	I0811 23:24:23.944494   32156 round_trippers.go:580]     Audit-Id: 08a80851-b8d5-4b93-b866-6ee39106a699
	I0811 23:24:23.944502   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:23.945447   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:24.441214   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:24.441236   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:24.441244   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:24.441250   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:24.444325   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:24.444351   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:24.444362   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:24.444372   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:24.444386   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:24 GMT
	I0811 23:24:24.444395   32156 round_trippers.go:580]     Audit-Id: 1ab7d34d-acba-42ef-b792-3a794c320756
	I0811 23:24:24.444405   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:24.444413   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:24.444620   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:24.444908   32156 node_ready.go:58] node "multinode-618164" has status "Ready":"False"
	I0811 23:24:24.941357   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:24.941382   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:24.941395   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:24.941405   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:24.944213   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:24.944231   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:24.944238   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:24.944244   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:24.944249   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:24.944254   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:24.944259   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:24 GMT
	I0811 23:24:24.944264   32156 round_trippers.go:580]     Audit-Id: 5fbafed4-32b9-4e2b-9c78-ad816b8fc27e
	I0811 23:24:24.944811   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:25.441509   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.441547   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.441560   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.441570   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.444341   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.444365   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.444375   32156 round_trippers.go:580]     Audit-Id: 6e6eddea-aee5-42ea-895b-99ad1a0d559a
	I0811 23:24:25.444385   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.444393   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.444405   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.444415   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.444427   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.445015   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:25.445314   32156 node_ready.go:49] node "multinode-618164" has status "Ready":"True"
	I0811 23:24:25.445327   32156 node_ready.go:38] duration metric: took 5.2977013s waiting for node "multinode-618164" to be "Ready" ...
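The "Ready":"True" flip recorded just above is what ends the 5.3s node wait: the client polls GET /api/v1/nodes/<name> and inspects the NodeReady condition in the returned status. A minimal client-go sketch of that check, for illustration only (this is not minikube's actual node_ready.go; the kubeconfig path is an assumed placeholder, and the node name is taken from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's NodeReady condition is True --
	// the same test the node_ready log lines above are driving.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-618164", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", nodeReady(node))
	}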
	I0811 23:24:25.445334   32156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:24:25.445379   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:25.445387   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.445393   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.445399   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.452024   32156 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0811 23:24:25.452049   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.452058   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.452067   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.452075   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.452084   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.452092   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.452118   32156 round_trippers.go:580]     Audit-Id: 82e466c8-8aed-4196-bb8c-bc86da79a214
	I0811 23:24:25.453659   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"854"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83655 chars]
	I0811 23:24:25.456189   32156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:25.456260   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:25.456269   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.456276   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.456282   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.458591   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.458605   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.458611   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.458617   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.458625   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.458634   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.458652   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.458664   32156 round_trippers.go:580]     Audit-Id: 445efbee-ecd6-473e-adb2-4d52dc200b71
	I0811 23:24:25.458879   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:25.459390   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.459406   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.459414   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.459420   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.461583   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.461596   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.461603   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.461614   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.461623   32156 round_trippers.go:580]     Audit-Id: e5ef9b96-a743-4573-af95-f8506478ec65
	I0811 23:24:25.461638   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.461646   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.461654   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.461797   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:25.462204   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:25.462221   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.462231   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.462240   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.464104   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:25.464116   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.464122   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.464127   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.464132   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.464139   32156 round_trippers.go:580]     Audit-Id: dcb642da-9e5b-483a-9041-800cd982e1ff
	I0811 23:24:25.464149   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.464159   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.464315   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:25.464753   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.464766   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.464773   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.464779   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.466613   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:25.466624   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.466629   32156 round_trippers.go:580]     Audit-Id: ce0caab4-a354-4b34-94c5-2db6f3d60119
	I0811 23:24:25.466635   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.466640   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.466645   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.466651   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.466656   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.466939   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:25.968030   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:25.968056   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.968069   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.968080   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.971619   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:25.971636   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.971643   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.971648   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.971653   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.971659   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.971665   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.971670   32156 round_trippers.go:580]     Audit-Id: 8fcb4987-acec-4727-b923-e632bfd490f1
	I0811 23:24:25.972079   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:25.972606   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.972621   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.972629   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.972635   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.975228   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.975242   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.975257   32156 round_trippers.go:580]     Audit-Id: d3f41a02-cff5-4e62-9ee7-86bb23f78203
	I0811 23:24:25.975266   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.975277   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.975290   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.975299   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.975308   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.975487   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:26.468217   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:26.468240   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.468249   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.468255   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.471094   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:26.471129   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.471140   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.471150   32156 round_trippers.go:580]     Audit-Id: 74af3dc7-7cdc-455d-8af5-ac368043c3df
	I0811 23:24:26.471157   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.471162   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.471168   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.471173   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.471267   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:26.471810   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:26.471827   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.471916   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.471945   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.474066   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:26.474087   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.474096   32156 round_trippers.go:580]     Audit-Id: e0fee66f-d6c0-4902-8c74-e56c0be1588a
	I0811 23:24:26.474106   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.474115   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.474124   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.474132   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.474142   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.474302   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:26.967963   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:26.967993   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.968001   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.968008   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.971201   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:26.971221   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.971232   32156 round_trippers.go:580]     Audit-Id: 959f34ad-81ce-44b3-8e51-0c9e243c77f1
	I0811 23:24:26.971240   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.971248   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.971257   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.971268   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.971278   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.971462   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:26.971902   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:26.971917   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.971928   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.971938   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.974142   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:26.974157   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.974167   32156 round_trippers.go:580]     Audit-Id: 3f13bbb4-ab95-414f-b33c-7be2638a258a
	I0811 23:24:26.974175   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.974184   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.974193   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.974204   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.974215   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.974372   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:27.468239   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:27.468261   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.468271   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.468281   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.471150   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:27.471167   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.471177   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.471189   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.471199   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.471210   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.471224   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.471234   32156 round_trippers.go:580]     Audit-Id: 6d98046d-8925-4643-9ee3-138901d7afdb
	I0811 23:24:27.471416   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:27.471910   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:27.471924   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.471932   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.471938   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.474501   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:27.474515   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.474524   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.474534   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.474543   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.474552   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.474570   32156 round_trippers.go:580]     Audit-Id: 9cbe4b75-dcc6-4f58-ba2f-34b7a1a2ae2a
	I0811 23:24:27.474579   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.474951   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:27.475250   32156 pod_ready.go:102] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"False"
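With the probe reporting "Ready":"False", the pod's full condition set shows which gate is still failing (typically ContainersReady while the container comes back up after the VM restart). A command-line equivalent of the same check, using the pod name from the log above:

	kubectl -n kube-system get pod coredns-5d78c9869d-zrmf9 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

kubectl describe pod on the same object would additionally surface the events explaining why the readiness probe has not yet passed.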
	I0811 23:24:27.967698   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:27.967718   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.967728   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.967736   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.972072   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:27.972092   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.972102   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.972110   32156 round_trippers.go:580]     Audit-Id: 87ecb69a-34dd-45e3-bd25-8f383943fed6
	I0811 23:24:27.972117   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.972125   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.972133   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.972145   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.973050   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:27.973501   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:27.973517   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.973527   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.973537   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.975479   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:27.975495   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.975505   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.975514   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.975523   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.975533   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.975543   32156 round_trippers.go:580]     Audit-Id: 1fc8299b-aa16-4865-a90e-3fcb5c8af967
	I0811 23:24:27.975558   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.975656   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:28.468325   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:28.468347   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.468355   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.468361   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.471902   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:28.471919   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.471926   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.471932   32156 round_trippers.go:580]     Audit-Id: 9548ec6b-6248-4c27-8c8d-925526fdd392
	I0811 23:24:28.471937   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.471942   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.471951   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.471960   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.472051   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:28.472591   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:28.472608   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.472619   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.472628   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.474945   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:28.474960   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.474967   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.474975   32156 round_trippers.go:580]     Audit-Id: bfb07ccf-8b76-446d-bec7-40e616635bc9
	I0811 23:24:28.474984   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.474995   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.475007   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.475019   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.475145   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:28.967726   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:28.967751   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.967760   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.967770   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.971148   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:28.971167   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.971176   32156 round_trippers.go:580]     Audit-Id: 0e3ff461-0ca9-48c8-9d94-b83189531448
	I0811 23:24:28.971195   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.971205   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.971216   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.971229   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.971240   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.971345   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:28.971806   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:28.971819   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.971826   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.971832   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.974333   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:28.974350   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.974359   32156 round_trippers.go:580]     Audit-Id: 185f62d2-498b-4554-bea9-486df9494c75
	I0811 23:24:28.974370   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.974379   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.974388   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.974397   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.974407   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.974521   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:29.468235   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:29.468258   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.468269   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.468278   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.470941   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.470960   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.470969   32156 round_trippers.go:580]     Audit-Id: b572a3bd-c4d9-4159-9087-e708e5ed6c6b
	I0811 23:24:29.470977   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.470985   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.470993   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.471001   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.471011   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.471290   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:29.471705   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:29.471725   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.471732   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.471738   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.473815   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.473830   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.473839   32156 round_trippers.go:580]     Audit-Id: 0e5fb9bb-9507-4244-8f5f-0631e1f00524
	I0811 23:24:29.473847   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.473855   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.473864   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.473874   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.473884   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.474042   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:29.967622   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:29.967656   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.967664   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.967670   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.970608   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.970623   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.970629   32156 round_trippers.go:580]     Audit-Id: 82c7acea-b9cb-440e-9e7e-fb32945e6cce
	I0811 23:24:29.970635   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.970643   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.970652   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.970662   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.970672   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.970998   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:29.971596   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:29.971616   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.971628   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.971641   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.974243   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.974261   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.974271   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.974280   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.974293   32156 round_trippers.go:580]     Audit-Id: ee208613-2a56-4f9b-abbd-a640827d3198
	I0811 23:24:29.974302   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.974310   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.974316   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.974467   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:29.974930   32156 pod_ready.go:102] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"False"
	I0811 23:24:30.468073   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:30.468099   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.468110   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.468120   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.470897   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:30.470916   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.470929   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.470941   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.470956   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.470964   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.470976   32156 round_trippers.go:580]     Audit-Id: 82f699de-0a54-481c-bb12-f87a0daa84e9
	I0811 23:24:30.470986   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.471134   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:30.471737   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:30.471751   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.471758   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.471767   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.475683   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:30.475698   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.475704   32156 round_trippers.go:580]     Audit-Id: ac6a86d1-75a4-46ae-807f-7ebfd31289bc
	I0811 23:24:30.475710   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.475720   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.475728   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.475738   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.475746   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.475902   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:30.967564   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:30.967586   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.967594   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.967601   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.972202   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:30.972221   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.972229   32156 round_trippers.go:580]     Audit-Id: e7a2e0c8-d59f-43ca-85fa-026ce0fe0d76
	I0811 23:24:30.972236   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.972247   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.972258   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.972267   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.972280   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.972499   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:30.973091   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:30.973113   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.973124   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.973139   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.977456   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:30.977471   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.977477   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.977485   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.977497   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.977506   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.977518   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.977528   32156 round_trippers.go:580]     Audit-Id: 381e4024-f9fe-4403-9375-88a00358975b
	I0811 23:24:30.977682   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:31.468315   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:31.468335   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.468345   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.468352   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.471062   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:31.471083   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.471094   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.471130   32156 round_trippers.go:580]     Audit-Id: fa12f66b-5889-4686-8116-11fe87af94c0
	I0811 23:24:31.471144   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.471152   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.471160   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.471167   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.471409   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:31.471875   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:31.471889   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.471896   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.471906   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.473962   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:31.473981   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.473991   32156 round_trippers.go:580]     Audit-Id: f7f3b369-6798-4044-b5ee-de737997014c
	I0811 23:24:31.473999   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.474008   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.474020   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.474032   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.474044   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.474263   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:31.967951   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:31.967972   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.967980   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.967986   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.971967   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:31.971990   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.972001   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.972008   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.972016   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.972024   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.972033   32156 round_trippers.go:580]     Audit-Id: 9fd3cb53-2603-40b0-bd50-00a987b1e227
	I0811 23:24:31.972042   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.972171   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:31.972605   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:31.972619   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.972629   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.972637   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.974757   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:31.974771   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.974780   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.974789   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.974799   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.974815   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.974823   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.974833   32156 round_trippers.go:580]     Audit-Id: 85d0f6ae-272d-4dc2-a561-40c101a5161a
	I0811 23:24:31.975024   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:31.975405   32156 pod_ready.go:102] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"False"
	I0811 23:24:32.468362   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:32.468382   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.468393   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.468402   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.473825   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:32.473845   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.473855   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.473863   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.473872   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.473881   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.473892   32156 round_trippers.go:580]     Audit-Id: c552d16d-a3b4-4d66-a376-eefd1c10eb1e
	I0811 23:24:32.473902   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.474014   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:32.474477   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:32.474490   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.474501   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.474510   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.477314   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:32.477331   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.477340   32156 round_trippers.go:580]     Audit-Id: 4e9ec59d-c226-4b0b-a98b-dba59305efde
	I0811 23:24:32.477348   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.477356   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.477365   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.477377   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.477387   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.477551   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:32.967630   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:32.967651   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.967659   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.967665   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.970961   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:32.970980   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.970990   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.971000   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.971013   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.971026   32156 round_trippers.go:580]     Audit-Id: 224adbd2-8bc3-4670-a026-066c23e93164
	I0811 23:24:32.971038   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.971048   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.971326   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"878","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0811 23:24:32.971757   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:32.971769   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.971776   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.971782   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.974205   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:32.974220   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.974229   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.974238   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.974248   32156 round_trippers.go:580]     Audit-Id: 0d872ae4-4e9b-4635-9e65-ba775b7a8de7
	I0811 23:24:32.974259   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.974268   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.974281   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.974584   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.468329   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:33.468352   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.468363   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.468371   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.472124   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:33.472147   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.472154   32156 round_trippers.go:580]     Audit-Id: 2ae31ed4-b404-43c6-aa1e-25c8cc2fb9f7
	I0811 23:24:33.472160   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.472166   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.472171   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.472177   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.472183   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.472683   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"878","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0811 23:24:33.473126   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.473138   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.473145   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.473151   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.475438   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.475460   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.475470   32156 round_trippers.go:580]     Audit-Id: e66ada46-5f8a-42d5-bcd0-9776162a1903
	I0811 23:24:33.475479   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.475493   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.475502   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.475511   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.475517   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.475604   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.968224   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:33.968246   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.968255   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.968261   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.971186   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.971206   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.971219   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.971227   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.971235   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.971246   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.971254   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.971266   32156 round_trippers.go:580]     Audit-Id: 89391019-02eb-4a7a-97b0-c7942170203a
	I0811 23:24:33.971602   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0811 23:24:33.972003   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.972014   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.972021   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.972027   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.974433   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.974452   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.974460   32156 round_trippers.go:580]     Audit-Id: fab66579-ff2f-4e3c-ace4-bb6c130e597c
	I0811 23:24:33.974466   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.974471   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.974480   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.974488   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.974498   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.975061   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.975340   32156 pod_ready.go:92] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.975355   32156 pod_ready.go:81] duration metric: took 8.519145326s waiting for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
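(The pod_ready.go lines trace minikube's readiness wait: roughly every half second it re-fetches the coredns pod and its node, logs status "Ready":"False" while the PodReady condition is unmet, and records the elapsed duration once the condition flips to True. A minimal client-go sketch of that loop, assuming a hypothetical kubeconfig path and reusing the pod name from the log; this is an illustration, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	// Re-check every 500ms for up to 6 minutes, the cadence and budget seen in the log.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5d78c9869d-zrmf9", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("took %s waiting for pod to be Ready\n", time.Since(start))
}

end of sketch)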
	I0811 23:24:33.975363   32156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.975402   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-618164
	I0811 23:24:33.975409   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.975416   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.975422   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.977484   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.977500   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.977507   32156 round_trippers.go:580]     Audit-Id: 6eac740e-8b4f-45e4-a18e-1f84084abe24
	I0811 23:24:33.977512   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.977517   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.977526   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.977531   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.977537   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.977667   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"868","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I0811 23:24:33.977982   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.977992   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.977998   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.978006   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.980986   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.981000   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.981006   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.981011   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.981016   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.981025   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.981041   32156 round_trippers.go:580]     Audit-Id: 08f0ed9b-c430-4199-9103-44ca5d887cec
	I0811 23:24:33.981050   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.981179   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.981412   32156 pod_ready.go:92] pod "etcd-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.981423   32156 pod_ready.go:81] duration metric: took 6.055093ms waiting for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.981438   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.981483   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-618164
	I0811 23:24:33.981490   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.981496   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.981502   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.983575   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.983593   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.983600   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.983608   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.983613   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.983621   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.983627   32156 round_trippers.go:580]     Audit-Id: 2fc96f46-941e-43f2-be3d-0a8a75940bcc
	I0811 23:24:33.983634   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.983776   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-618164","namespace":"kube-system","uid":"a1145d9b-2c2a-42b1-bbe6-142472dc9d01","resourceVersion":"870","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.6:8443","kubernetes.io/config.hash":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.mirror":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.seen":"2023-08-11T23:20:15.427440318Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7597 chars]
	I0811 23:24:33.984096   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.984106   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.984112   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.984118   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.985746   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:33.985762   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.985768   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.985774   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.985782   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.985788   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.985796   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.985801   32156 round_trippers.go:580]     Audit-Id: 648f59a3-f4c1-456b-bc5b-9e6c40876052
	I0811 23:24:33.985939   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.986168   32156 pod_ready.go:92] pod "kube-apiserver-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.986178   32156 pod_ready.go:81] duration metric: took 4.731192ms waiting for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.986186   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.986220   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-618164
	I0811 23:24:33.986227   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.986234   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.986240   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.988258   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.988273   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.988280   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.988286   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.988293   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.988299   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.988312   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.988322   32156 round_trippers.go:580]     Audit-Id: 5b6aa986-f47f-4a3f-84d3-e0186ec0151d
	I0811 23:24:33.988838   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-618164","namespace":"kube-system","uid":"41f34044-7115-493f-94d8-53f69fd37242","resourceVersion":"848","creationTimestamp":"2023-08-11T23:20:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.mirror":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.seen":"2023-08-11T23:20:06.002920339Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7170 chars]
	I0811 23:24:33.989165   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.989175   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.989182   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.989188   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.990811   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:33.990824   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.990833   32156 round_trippers.go:580]     Audit-Id: 19482bca-52fd-4f68-b367-dd9b5777c7e5
	I0811 23:24:33.990841   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.990847   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.990853   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.990859   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.990869   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.991010   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.991294   32156 pod_ready.go:92] pod "kube-controller-manager-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.991308   32156 pod_ready.go:81] duration metric: took 5.116437ms waiting for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
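(The etcd, kube-apiserver, and kube-controller-manager checks above each complete in single-digit milliseconds because those mirror pods already report Ready on the restarted control plane. Since the response bodies show they all carry the labels component=<name> and tier=control-plane, a single label-selector List could in principle fetch their statuses in one round trip instead of three GETs; a hedged sketch, with the same hypothetical kubeconfig path as above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The control-plane mirror pods in the log all carry tier=control-plane,
	// so one List covers etcd, kube-apiserver, kube-controller-manager, etc.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: phase=%s\n", p.Name, p.Status.Phase)
	}
}

end of sketch)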
	I0811 23:24:33.991315   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.991359   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:24:33.991373   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.991382   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.991392   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.993626   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.993640   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.993651   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.993660   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.993669   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.993675   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.993685   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.993690   32156 round_trippers.go:580]     Audit-Id: 8deac8a8-7fc0-4662-9a6a-98a6486d95b7
	I0811 23:24:33.993919   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9ldtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff783df6-3af7-44cf-bc60-843db8420efa","resourceVersion":"534","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0811 23:24:33.994228   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:24:33.994239   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.994247   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.994253   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.995880   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:33.995892   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.995898   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.995903   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.995909   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.995914   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.995920   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.995925   32156 round_trippers.go:580]     Audit-Id: 87b8b550-3255-4a93-b277-cc6dd7ee6bc1
	I0811 23:24:33.996105   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"5117de97-d432-4fe0-baad-4ef71b0a5470","resourceVersion":"599","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3267 chars]
	I0811 23:24:33.996285   32156 pod_ready.go:92] pod "kube-proxy-9ldtq" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.996295   32156 pod_ready.go:81] duration metric: took 4.975043ms waiting for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.996302   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.168683   32156 request.go:628] Waited for 172.32863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:34.168758   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:34.168764   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.168773   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.168782   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.172087   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:34.172105   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.172111   32156 round_trippers.go:580]     Audit-Id: b9389cec-75af-4f94-8a9d-7240b0bfd7f6
	I0811 23:24:34.172117   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.172126   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.172132   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.172140   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.172145   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.172410   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-glw45","generateName":"kube-proxy-","namespace":"kube-system","uid":"4616f16f-9566-447c-90cd-8e37c18508e3","resourceVersion":"843","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
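
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's request.go when its local token-bucket rate limiter (not the server's APF) delays a call. The budget lives on rest.Config; a sketch of raising it, with an assumed kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; substitute a real kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second (client-go's default is 5)
	cfg.Burst = 100 // short-term burst allowance (default is 10)
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", client != nil)
}
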
	I0811 23:24:34.369108   32156 request.go:628] Waited for 196.33367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:34.369196   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:34.369204   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.369216   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.369234   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.372658   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:34.372675   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.372682   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.372688   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.372693   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.372699   32156 round_trippers.go:580]     Audit-Id: 82e96bbf-57a2-484f-bf9d-2381f69a4c81
	I0811 23:24:34.372706   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.372719   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.372919   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:34.373206   32156 pod_ready.go:92] pod "kube-proxy-glw45" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:34.373221   32156 pod_ready.go:81] duration metric: took 376.904763ms waiting for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.373234   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.568660   32156 request.go:628] Waited for 195.365222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:34.568733   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:34.568741   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.568749   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.568755   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.571454   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:34.571477   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.571487   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.571495   32156 round_trippers.go:580]     Audit-Id: 8d618cf0-88d2-47c6-9ef6-7b5170fa9cd2
	I0811 23:24:34.571503   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.571511   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.571522   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.571533   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.571863   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pv5p5","generateName":"kube-proxy-","namespace":"kube-system","uid":"08e6223f-0c5c-47bd-b37d-67f279f4d4be","resourceVersion":"737","creationTimestamp":"2023-08-11T23:22:07Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0811 23:24:34.768622   32156 request.go:628] Waited for 196.348003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:34.768682   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:34.768701   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.768711   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.768721   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.771375   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:34.771392   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.771399   32156 round_trippers.go:580]     Audit-Id: 3fbb8d82-2b28-4e58-8ae8-bacb17cfc2f9
	I0811 23:24:34.771405   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.771410   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.771415   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.771421   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.771426   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.771671   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m03","uid":"84060722-cb59-478c-9b01-7517a6ae9f59","resourceVersion":"756","creationTimestamp":"2023-08-11T23:22:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0811 23:24:34.771907   32156 pod_ready.go:92] pod "kube-proxy-pv5p5" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:34.771918   32156 pod_ready.go:81] duration metric: took 398.678555ms waiting for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.771927   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.968272   32156 request.go:628] Waited for 196.292497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:34.968344   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:34.968350   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.968360   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.968375   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.972172   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:34.972191   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.972197   32156 round_trippers.go:580]     Audit-Id: d0684e37-d5f1-424a-8a17-9bb10a0e3328
	I0811 23:24:34.972203   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.972208   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.972213   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.972219   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.972224   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.972362   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-618164","namespace":"kube-system","uid":"b2a96d9a-e022-4abd-b8c6-e6ec3102773f","resourceVersion":"871","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.mirror":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.seen":"2023-08-11T23:20:15.427437689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I0811 23:24:35.169110   32156 request.go:628] Waited for 196.363918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:35.169155   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:35.169159   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.169166   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.169172   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.171710   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:35.171731   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.171744   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.171756   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.171765   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.171774   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.171787   32156 round_trippers.go:580]     Audit-Id: d364fb7f-6e32-49c1-9e80-a4d60178f479
	I0811 23:24:35.171801   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.172414   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:35.172719   32156 pod_ready.go:92] pod "kube-scheduler-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:35.172735   32156 pod_ready.go:81] duration metric: took 400.801391ms waiting for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:35.172747   32156 pod_ready.go:38] duration metric: took 9.727404873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
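
The per-pod waits above ("waiting up to 6m0s ... to be Ready" in pod_ready.go) amount to polling each pod's Ready condition. A self-contained sketch of that shape with client-go; the kubeconfig path and intervals are placeholders, and waitPodReady is an illustrative name, not minikube's function:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports condition Ready=True,
// or the timeout elapses.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(context.Background(), client, "kube-system", "kube-proxy-9ldtq", 6*time.Minute))
}
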
	I0811 23:24:35.172770   32156 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:24:35.172828   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:35.185884   32156 command_runner.go:130] > 1697
	I0811 23:24:35.186187   32156 api_server.go:72] duration metric: took 15.125922974s to wait for apiserver process to appear ...
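
api_server.go confirms the process exists before probing health: pgrep -f matches the full command line, -x requires an exact match, and -n picks the newest match, printing a single PID (1697 above). minikube issues this through its ssh_runner; run locally, the equivalent is:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// apiserverPID runs the same pgrep invocation seen in the log and parses
// the single PID it prints.
func apiserverPID() (int, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return 0, fmt.Errorf("apiserver process not found: %w", err)
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	pid, err := apiserverPID()
	fmt.Println(pid, err)
}
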
	I0811 23:24:35.186204   32156 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:24:35.186221   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:35.192470   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0811 23:24:35.192520   32156 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0811 23:24:35.192525   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.192534   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.192541   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.193372   32156 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0811 23:24:35.193388   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.193397   32156 round_trippers.go:580]     Content-Length: 263
	I0811 23:24:35.193406   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.193414   32156 round_trippers.go:580]     Audit-Id: ca9c7d49-11cf-466b-973a-b094139ea178
	I0811 23:24:35.193422   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.193434   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.193444   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.193454   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.193473   32156 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0811 23:24:35.193520   32156 api_server.go:141] control plane version: v1.27.4
	I0811 23:24:35.193534   32156 api_server.go:131] duration metric: took 7.324354ms to wait for apiserver health ...
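
The health probe itself is just a GET against /healthz expecting a 200 with body "ok", followed by a GET against /version to read the control plane version. A sketch of the healthz half; building an *http.Client that carries the cluster's TLS credentials is omitted here, so http.DefaultClient stands in:

package main

import (
	"fmt"
	"io"
	"net/http"
)

// checkHealthz requires a 200 response from <base>/healthz and returns
// the body in the error message otherwise.
func checkHealthz(c *http.Client, base string) error {
	resp, err := c.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz(http.DefaultClient, "https://192.168.39.6:8443"))
}
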
	I0811 23:24:35.193542   32156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:24:35.368931   32156 request.go:628] Waited for 175.311269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.368990   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.368995   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.369003   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.369010   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.374076   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:35.374102   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.374112   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.374120   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.374127   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.374135   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.374143   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.374155   32156 round_trippers.go:580]     Audit-Id: f23f6832-71ab-429b-86cd-18cc8e984ed8
	I0811 23:24:35.375597   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82891 chars]
	I0811 23:24:35.378074   32156 system_pods.go:59] 12 kube-system pods found
	I0811 23:24:35.378097   32156 system_pods.go:61] "coredns-5d78c9869d-zrmf9" [c3c83ae1-ae12-4872-9c78-4aff9f1cefe4] Running
	I0811 23:24:35.378104   32156 system_pods.go:61] "etcd-multinode-618164" [543135b3-5e52-43aa-af7c-1fea5cfb95b6] Running
	I0811 23:24:35.378113   32156 system_pods.go:61] "kindnet-clfqj" [b3e12c4b-402f-467b-a1f2-f7db2ae3d0ef] Running
	I0811 23:24:35.378118   32156 system_pods.go:61] "kindnet-m2c5t" [5264f13e-c667-4d82-912f-49c23eaf31cd] Running
	I0811 23:24:35.378124   32156 system_pods.go:61] "kindnet-szdxp" [d827d201-1ae4-4db8-858f-0fda601d5c40] Running
	I0811 23:24:35.378130   32156 system_pods.go:61] "kube-apiserver-multinode-618164" [a1145d9b-2c2a-42b1-bbe6-142472dc9d01] Running
	I0811 23:24:35.378137   32156 system_pods.go:61] "kube-controller-manager-multinode-618164" [41f34044-7115-493f-94d8-53f69fd37242] Running
	I0811 23:24:35.378148   32156 system_pods.go:61] "kube-proxy-9ldtq" [ff783df6-3af7-44cf-bc60-843db8420efa] Running
	I0811 23:24:35.378155   32156 system_pods.go:61] "kube-proxy-glw45" [4616f16f-9566-447c-90cd-8e37c18508e3] Running
	I0811 23:24:35.378161   32156 system_pods.go:61] "kube-proxy-pv5p5" [08e6223f-0c5c-47bd-b37d-67f279f4d4be] Running
	I0811 23:24:35.378169   32156 system_pods.go:61] "kube-scheduler-multinode-618164" [b2a96d9a-e022-4abd-b8c6-e6ec3102773f] Running
	I0811 23:24:35.378176   32156 system_pods.go:61] "storage-provisioner" [84ba55f6-4725-46ae-810f-130cbb82dd7f] Running
	I0811 23:24:35.378185   32156 system_pods.go:74] duration metric: took 184.636196ms to wait for pod list to return data ...
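
system_pods.go then takes a single List of kube-system and checks every pod at once, rather than polling them one at a time. Roughly, with the same placeholder kubeconfig as above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			fmt.Println("  not running yet")
		}
	}
}
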
	I0811 23:24:35.378196   32156 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:24:35.568633   32156 request.go:628] Waited for 190.369653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:24:35.568710   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:24:35.568718   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.568728   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.568748   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.571469   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:35.571512   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.571522   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.571532   32156 round_trippers.go:580]     Audit-Id: e42f39e6-9916-4002-b539-06cfc6cba17e
	I0811 23:24:35.571543   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.571554   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.571567   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.571577   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.571594   32156 round_trippers.go:580]     Content-Length: 261
	I0811 23:24:35.571617   32156 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"892"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"917f0a1c-39f6-4f23-806b-10a0703a649d","resourceVersion":"350","creationTimestamp":"2023-08-11T23:20:27Z"}}]}
	I0811 23:24:35.571798   32156 default_sa.go:45] found service account: "default"
	I0811 23:24:35.571813   32156 default_sa.go:55] duration metric: took 193.611319ms for default service account to be created ...
	I0811 23:24:35.571823   32156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:24:35.769307   32156 request.go:628] Waited for 197.386177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.769371   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.769379   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.769390   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.769407   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.774853   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:35.774883   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.774893   32156 round_trippers.go:580]     Audit-Id: 1befc1fe-531e-4081-8838-356f524138aa
	I0811 23:24:35.774901   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.774908   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.774916   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.774924   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.774934   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.777324   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82891 chars]
	I0811 23:24:35.780807   32156 system_pods.go:86] 12 kube-system pods found
	I0811 23:24:35.780828   32156 system_pods.go:89] "coredns-5d78c9869d-zrmf9" [c3c83ae1-ae12-4872-9c78-4aff9f1cefe4] Running
	I0811 23:24:35.780834   32156 system_pods.go:89] "etcd-multinode-618164" [543135b3-5e52-43aa-af7c-1fea5cfb95b6] Running
	I0811 23:24:35.780838   32156 system_pods.go:89] "kindnet-clfqj" [b3e12c4b-402f-467b-a1f2-f7db2ae3d0ef] Running
	I0811 23:24:35.780841   32156 system_pods.go:89] "kindnet-m2c5t" [5264f13e-c667-4d82-912f-49c23eaf31cd] Running
	I0811 23:24:35.780845   32156 system_pods.go:89] "kindnet-szdxp" [d827d201-1ae4-4db8-858f-0fda601d5c40] Running
	I0811 23:24:35.780849   32156 system_pods.go:89] "kube-apiserver-multinode-618164" [a1145d9b-2c2a-42b1-bbe6-142472dc9d01] Running
	I0811 23:24:35.780854   32156 system_pods.go:89] "kube-controller-manager-multinode-618164" [41f34044-7115-493f-94d8-53f69fd37242] Running
	I0811 23:24:35.780858   32156 system_pods.go:89] "kube-proxy-9ldtq" [ff783df6-3af7-44cf-bc60-843db8420efa] Running
	I0811 23:24:35.780862   32156 system_pods.go:89] "kube-proxy-glw45" [4616f16f-9566-447c-90cd-8e37c18508e3] Running
	I0811 23:24:35.780868   32156 system_pods.go:89] "kube-proxy-pv5p5" [08e6223f-0c5c-47bd-b37d-67f279f4d4be] Running
	I0811 23:24:35.780872   32156 system_pods.go:89] "kube-scheduler-multinode-618164" [b2a96d9a-e022-4abd-b8c6-e6ec3102773f] Running
	I0811 23:24:35.780878   32156 system_pods.go:89] "storage-provisioner" [84ba55f6-4725-46ae-810f-130cbb82dd7f] Running
	I0811 23:24:35.780883   32156 system_pods.go:126] duration metric: took 209.056156ms to wait for k8s-apps to be running ...
	I0811 23:24:35.780891   32156 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:24:35.780929   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:24:35.795511   32156 system_svc.go:56] duration metric: took 14.610121ms WaitForService to wait for kubelet.
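
The kubelet check leans entirely on the exit status: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. Mirrored locally, with the argument list copied from the log line above:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active; a nil error
// from Run means the command exited 0.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
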
	I0811 23:24:35.795536   32156 kubeadm.go:581] duration metric: took 15.735272927s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:24:35.795553   32156 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:24:35.969004   32156 request.go:628] Waited for 173.360492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0811 23:24:35.969066   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0811 23:24:35.969072   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.969081   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.969099   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.972347   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:35.972371   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.972381   32156 round_trippers.go:580]     Audit-Id: ecf843d8-a83f-4a75-9e0d-626497b2f5fd
	I0811 23:24:35.972395   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.972403   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.972413   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.972423   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.972435   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.972823   32156 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13542 chars]
	I0811 23:24:35.973334   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:35.973350   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:35.973359   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:35.973363   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:35.973366   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:35.973369   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:35.973372   32156 node_conditions.go:105] duration metric: took 177.812858ms to run NodePressure ...
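
node_conditions.go reads each node's capacity out of a single NodeList, which is where the three storage/cpu pairs above (one per node) come from. A sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Copy out of the map so the Quantity values are addressable.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
}
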
	I0811 23:24:35.973381   32156 start.go:228] waiting for startup goroutines ...
	I0811 23:24:35.973390   32156 start.go:233] waiting for cluster config update ...
	I0811 23:24:35.973396   32156 start.go:242] writing updated cluster config ...
	I0811 23:24:35.973816   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:24:35.973902   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:24:35.976929   32156 out.go:177] * Starting worker node multinode-618164-m02 in cluster multinode-618164
	I0811 23:24:35.978578   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:24:35.978605   32156 cache.go:57] Caching tarball of preloaded images
	I0811 23:24:35.978714   32156 preload.go:174] Found /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0811 23:24:35.978730   32156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0811 23:24:35.978829   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:24:35.978998   32156 start.go:365] acquiring machines lock for multinode-618164-m02: {Name:mk5e6cee1d1e9195cd61b1fff8d9384d7220567d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0811 23:24:35.979041   32156 start.go:369] acquired machines lock for "multinode-618164-m02" in 23.215µs
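
The machines lock printout ({Name:... Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}) matches the Spec struct of the juju mutex package, which serializes concurrent minikube invocations that touch the same machine. A sketch assuming github.com/juju/mutex/v2 (treat the library and version as an assumption here):

package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	spec := mutex.Spec{
		Name:    "mkexample", // illustrative; minikube derives the name from the machine path
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // poll interval while another holder has the lock
		Timeout: 13 * time.Minute,       // give up after this long
	}
	release, err := mutex.Acquire(spec)
	if err != nil {
		panic(err)
	}
	defer release.Release()
	fmt.Println("acquired machines lock")
}
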
	I0811 23:24:35.979058   32156 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:24:35.979067   32156 fix.go:54] fixHost starting: m02
	I0811 23:24:35.979362   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:24:35.979386   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:24:35.993765   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0811 23:24:35.994154   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:24:35.994621   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:24:35.994641   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:24:35.994936   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:24:35.995095   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:35.995252   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetState
	I0811 23:24:35.996775   32156 fix.go:102] recreateIfNeeded on multinode-618164-m02: state=Stopped err=<nil>
	I0811 23:24:35.996795   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	W0811 23:24:35.996971   32156 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:24:35.998957   32156 out.go:177] * Restarting existing kvm2 VM for "multinode-618164-m02" ...
	I0811 23:24:36.000530   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .Start
	I0811 23:24:36.000704   32156 main.go:141] libmachine: (multinode-618164-m02) Ensuring networks are active...
	I0811 23:24:36.001375   32156 main.go:141] libmachine: (multinode-618164-m02) Ensuring network default is active
	I0811 23:24:36.001701   32156 main.go:141] libmachine: (multinode-618164-m02) Ensuring network mk-multinode-618164 is active
	I0811 23:24:36.002092   32156 main.go:141] libmachine: (multinode-618164-m02) Getting domain xml...
	I0811 23:24:36.002832   32156 main.go:141] libmachine: (multinode-618164-m02) Creating domain...
	I0811 23:24:37.220070   32156 main.go:141] libmachine: (multinode-618164-m02) Waiting to get IP...
	I0811 23:24:37.220993   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:37.221369   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:37.221470   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:37.221355   32402 retry.go:31] will retry after 277.268435ms: waiting for machine to come up
	I0811 23:24:37.499821   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:37.500295   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:37.500318   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:37.500248   32402 retry.go:31] will retry after 387.190873ms: waiting for machine to come up
	I0811 23:24:37.888587   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:37.889165   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:37.889188   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:37.889136   32402 retry.go:31] will retry after 366.432092ms: waiting for machine to come up
	I0811 23:24:38.256533   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:38.256993   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:38.257024   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:38.256934   32402 retry.go:31] will retry after 391.941627ms: waiting for machine to come up
	I0811 23:24:38.650579   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:38.650997   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:38.651027   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:38.650941   32402 retry.go:31] will retry after 680.694158ms: waiting for machine to come up
	I0811 23:24:39.332856   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:39.333304   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:39.333387   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:39.333268   32402 retry.go:31] will retry after 868.271634ms: waiting for machine to come up
	I0811 23:24:40.203328   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:40.203706   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:40.203748   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:40.203650   32402 retry.go:31] will retry after 997.014712ms: waiting for machine to come up
	I0811 23:24:41.202277   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:41.202642   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:41.202670   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:41.202590   32402 retry.go:31] will retry after 1.410631845s: waiting for machine to come up
	I0811 23:24:42.615487   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:42.615972   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:42.616014   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:42.615931   32402 retry.go:31] will retry after 1.553384999s: waiting for machine to come up
	I0811 23:24:44.171644   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:44.172128   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:44.172154   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:44.172083   32402 retry.go:31] will retry after 2.193325027s: waiting for machine to come up
	I0811 23:24:46.366732   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:46.367241   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:46.367271   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:46.367187   32402 retry.go:31] will retry after 2.303211004s: waiting for machine to come up
	I0811 23:24:48.672552   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:48.673089   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:48.673117   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:48.673037   32402 retry.go:31] will retry after 3.562523492s: waiting for machine to come up
	I0811 23:24:52.237381   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:52.237950   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:52.237976   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:52.237911   32402 retry.go:31] will retry after 3.340176602s: waiting for machine to come up
	I0811 23:24:55.582334   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.582750   32156 main.go:141] libmachine: (multinode-618164-m02) Found IP for machine: 192.168.39.254
	I0811 23:24:55.582782   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has current primary IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.582790   32156 main.go:141] libmachine: (multinode-618164-m02) Reserving static IP address...
	I0811 23:24:55.583220   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "multinode-618164-m02", mac: "52:54:00:d3:12:e8", ip: "192.168.39.254"} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.583243   32156 main.go:141] libmachine: (multinode-618164-m02) Reserved static IP address: 192.168.39.254
	I0811 23:24:55.583255   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | skip adding static IP to network mk-multinode-618164 - found existing host DHCP lease matching {name: "multinode-618164-m02", mac: "52:54:00:d3:12:e8", ip: "192.168.39.254"}
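
The "will retry after ..." sequence while waiting for the DHCP lease shows jittered, roughly doubling delays (277ms, 387ms, ... 3.5s). A generic sketch of that retry shape; not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs fn with a jittered, roughly doubling delay until it
// succeeds or the base delay exceeds max.
func retryExpo(fn func() error, initial, max time.Duration) error {
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if delay > max {
			return fmt.Errorf("timed out: %w", err)
		}
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 300*time.Millisecond, 10*time.Second)
	fmt.Println(err)
}
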
	I0811 23:24:55.583266   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | Getting to WaitForSSH function...
	I0811 23:24:55.583273   32156 main.go:141] libmachine: (multinode-618164-m02) Waiting for SSH to be available...
	I0811 23:24:55.585360   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.585819   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.585852   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.585962   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | Using SSH client type: external
	I0811 23:24:55.585985   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa (-rw-------)
	I0811 23:24:55.586015   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0811 23:24:55.586029   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | About to run SSH command:
	I0811 23:24:55.586045   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | exit 0
	I0811 23:24:55.674828   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | SSH cmd err, output: <nil>: 
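
WaitForSSH shells out to /usr/bin/ssh with the option list logged above and runs `exit 0` on the guest; a nil error means sshd is answering. Reconstructed with a trimmed option list and a placeholder key path:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` on the guest and treats a zero exit status as
// "SSH is available".
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	return err
}

func main() {
	_ = probeSSH("192.168.39.254", "/path/to/id_rsa") // placeholder key path
}
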
	I0811 23:24:55.675253   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetConfigRaw
	I0811 23:24:55.675916   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:24:55.678425   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.678834   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.678875   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.679160   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:24:55.679394   32156 machine.go:88] provisioning docker machine ...
	I0811 23:24:55.679414   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:55.679607   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetMachineName
	I0811 23:24:55.679774   32156 buildroot.go:166] provisioning hostname "multinode-618164-m02"
	I0811 23:24:55.679791   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetMachineName
	I0811 23:24:55.679892   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:55.681946   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.682298   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.682330   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.682431   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:55.682573   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.682733   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.682849   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:55.683015   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:55.683464   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:55.683478   32156 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-618164-m02 && echo "multinode-618164-m02" | sudo tee /etc/hostname
	I0811 23:24:55.817992   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-618164-m02
	
	I0811 23:24:55.818026   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:55.820928   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.821428   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.821472   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.821656   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:55.821835   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.822019   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.822171   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:55.822361   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:55.822766   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:55.822784   32156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-618164-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-618164-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-618164-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:24:55.950900   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:24:55.950935   32156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17044-9593/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-9593/.minikube}
	I0811 23:24:55.950951   32156 buildroot.go:174] setting up certificates
	I0811 23:24:55.950961   32156 provision.go:83] configureAuth start
	I0811 23:24:55.950972   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetMachineName
	I0811 23:24:55.951339   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:24:55.954129   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.954518   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.954546   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.954705   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:55.957036   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.957395   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.957427   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.957524   32156 provision.go:138] copyHostCerts
	I0811 23:24:55.957563   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:24:55.957592   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem, removing ...
	I0811 23:24:55.957601   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:24:55.957661   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem (1078 bytes)
	I0811 23:24:55.957766   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:24:55.957787   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem, removing ...
	I0811 23:24:55.957791   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:24:55.957818   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem (1123 bytes)
	I0811 23:24:55.957860   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:24:55.957874   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem, removing ...
	I0811 23:24:55.957878   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:24:55.957905   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem (1675 bytes)
	I0811 23:24:55.957947   32156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem org=jenkins.multinode-618164-m02 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube multinode-618164-m02]
	I0811 23:24:56.042214   32156 provision.go:172] copyRemoteCerts
	I0811 23:24:56.042266   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:24:56.042285   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.045003   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.045436   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.045470   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.045662   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.045864   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.046035   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.046206   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:56.137954   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:24:56.138021   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0811 23:24:56.162271   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:24:56.162328   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0811 23:24:56.184830   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:24:56.184883   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:24:56.207461   32156 provision.go:86] duration metric: configureAuth took 256.487005ms
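configureAuth (provision.go:112 above) generates a server certificate whose SANs cover the node IP, localhost and the machine name, then scps it to /etc/docker. minikube does this in Go; purely as a hedged illustration of the same SAN set, an openssl equivalent would look roughly like:

	#!/usr/bin/env bash
	set -euo pipefail
	
	# Assumed inputs: an existing CA pair; all paths here are illustrative.
	CA_CERT=ca.pem
	CA_KEY=ca-key.pem
	IP=192.168.39.254
	NAME=multinode-618164-m02
	
	# Server key and CSR.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.${NAME}" -out server.csr
	
	# Sign with the CA, embedding the SAN list the log shows for this node.
	openssl x509 -req -in server.csr -CA "$CA_CERT" -CAkey "$CA_KEY" \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:%s,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:%s" "$IP" "$NAME")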
	I0811 23:24:56.207492   32156 buildroot.go:189] setting minikube options for container-runtime
	I0811 23:24:56.207719   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:24:56.207746   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:56.208076   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.210511   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.210868   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.210899   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.211053   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.211233   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.211394   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.211516   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.211671   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:56.212043   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:56.212055   32156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 23:24:56.332768   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0811 23:24:56.332790   32156 buildroot.go:70] root file system type: tmpfs
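The provisioner records the root filesystem type; tmpfs is the expected answer on the buildroot ISO, whose root lives in RAM. The probe itself is the one-liner above; a standalone version with a findmnt cross-check (findmnt is an added alternative, not in the log):

	#!/usr/bin/env bash
	# Print the filesystem type of / (tmpfs on the minikube buildroot ISO).
	fstype=$(df --output=fstype / | tail -n 1)
	echo "root fs: ${fstype}"
	
	# findmnt gives the same answer where util-linux is available.
	command -v findmnt >/dev/null && findmnt -n -o FSTYPE /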
	I0811 23:24:56.332941   32156 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 23:24:56.332964   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.335961   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.336333   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.336364   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.336553   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.336715   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.336902   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.337011   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.337212   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:56.337577   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:56.337635   32156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 23:24:56.467992   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 23:24:56.468034   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.470915   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.471303   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.471324   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.471509   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.471683   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.471840   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.472023   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.472202   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:56.472579   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:56.472597   32156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 23:24:57.318506   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0811 23:24:57.318530   32156 machine.go:91] provisioned docker machine in 1.639122754s
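The mv-only-if-different dance above (diff the freshly rendered unit against the installed one, and only on a difference move it into place, daemon-reload, enable and restart) is what keeps an unchanged docker.service from being restarted on every provision. Here the diff fails with "can't stat" because no unit exists yet, so the file is installed and docker is enabled and started. The same pattern in isolation:

	#!/usr/bin/env bash
	set -euo pipefail
	
	UNIT=/lib/systemd/system/docker.service   # target unit path from the log
	NEW=${UNIT}.new                           # freshly rendered unit
	
	# Install and (re)start only when the rendered unit actually differs.
	if ! sudo diff -u "$UNIT" "$NEW"; then
	    sudo mv "$NEW" "$UNIT"
	    sudo systemctl daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi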
	I0811 23:24:57.318540   32156 start.go:300] post-start starting for "multinode-618164-m02" (driver="kvm2")
	I0811 23:24:57.318549   32156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:24:57.318563   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.318866   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:24:57.318885   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:57.321491   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.321900   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.321931   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.322120   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.322294   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.322465   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.322620   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:57.415404   32156 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:24:57.419740   32156 command_runner.go:130] > NAME=Buildroot
	I0811 23:24:57.419756   32156 command_runner.go:130] > VERSION=2021.02.12-1-gb58903a-dirty
	I0811 23:24:57.419761   32156 command_runner.go:130] > ID=buildroot
	I0811 23:24:57.419766   32156 command_runner.go:130] > VERSION_ID=2021.02.12
	I0811 23:24:57.419771   32156 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0811 23:24:57.419811   32156 info.go:137] Remote host: Buildroot 2021.02.12
	I0811 23:24:57.419823   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/addons for local assets ...
	I0811 23:24:57.419878   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/files for local assets ...
	I0811 23:24:57.419944   32156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> 168362.pem in /etc/ssl/certs
	I0811 23:24:57.419954   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /etc/ssl/certs/168362.pem
	I0811 23:24:57.420027   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:24:57.430951   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:24:57.455810   32156 start.go:303] post-start completed in 137.254169ms
	I0811 23:24:57.455827   32156 fix.go:56] fixHost completed within 21.476760663s
	I0811 23:24:57.455846   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:57.458819   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.459285   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.459319   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.459481   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.459666   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.459880   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.460062   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.460266   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:57.460654   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:57.460674   32156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0811 23:24:57.580010   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691796297.530057331
	
	I0811 23:24:57.580030   32156 fix.go:206] guest clock: 1691796297.530057331
	I0811 23:24:57.580039   32156 fix.go:219] Guest: 2023-08-11 23:24:57.530057331 +0000 UTC Remote: 2023-08-11 23:24:57.455831086 +0000 UTC m=+84.766041720 (delta=74.226245ms)
	I0811 23:24:57.580058   32156 fix.go:190] guest clock delta is within tolerance: 74.226245ms
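fix.go compares the guest's `date +%s.%N` with the host clock and only proceeds when the delta is inside tolerance (74ms here). A sketch of the same check against a remote machine (HOST and the tolerance value are illustrative assumptions; the measured delta also absorbs SSH round-trip time, as minikube's does):

	#!/usr/bin/env bash
	set -euo pipefail
	
	HOST=user@192.168.39.254       # assumed SSH target
	TOLERANCE=2                    # seconds; illustrative, not minikube's exact bound
	
	guest=$(ssh "$HOST" 'date +%s.%N')
	local_now=$(date +%s.%N)
	
	# Absolute delta, compared with awk since the values are fractional.
	awk -v g="$guest" -v l="$local_now" -v t="$TOLERANCE" 'BEGIN {
	    d = g - l; if (d < 0) d = -d;
	    printf "clock delta: %.6fs\n", d;
	    exit (d <= t) ? 0 : 1
	}'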
	I0811 23:24:57.580063   32156 start.go:83] releasing machines lock for "multinode-618164-m02", held for 21.601011459s
	I0811 23:24:57.580087   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.580383   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:24:57.582794   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.583139   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.583182   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.585603   32156 out.go:177] * Found network options:
	I0811 23:24:57.587391   32156 out.go:177]   - NO_PROXY=192.168.39.6
	W0811 23:24:57.589014   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:24:57.589065   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.589601   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.589779   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.589859   32156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:24:57.589895   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	W0811 23:24:57.589954   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:24:57.590035   32156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:24:57.590057   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:57.592408   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.592824   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.592857   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.592888   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.593056   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.593245   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.593291   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.593320   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.593399   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.593467   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.593544   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:57.593644   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.593790   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.593920   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:57.703458   32156 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:24:57.703747   32156 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0811 23:24:57.703788   32156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0811 23:24:57.703848   32156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:24:57.723142   32156 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0811 23:24:57.725191   32156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
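Because kindnet will be installed as the CNI, any pre-existing bridge or podman CNI configs are renamed out of the way with a .mk_disabled suffix rather than deleted, as the find/-exec above shows (here 87-podman-bridge.conflist). The same command as a standalone script, with the mv quoted a little more defensively:

	#!/usr/bin/env bash
	set -euo pipefail
	
	# Rename bridge/podman CNI configs so the runtime ignores them, while
	# keeping them recoverable under the .mk_disabled suffix.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;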
	I0811 23:24:57.725205   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:24:57.725317   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:24:57.743962   32156 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0811 23:24:57.744503   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0811 23:24:57.756067   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0811 23:24:57.765986   32156 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0811 23:24:57.766045   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0811 23:24:57.777864   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:24:57.789555   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0811 23:24:57.802111   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:24:57.813823   32156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:24:57.824785   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0811 23:24:57.835526   32156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:24:57.844800   32156 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:24:57.844854   32156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
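Pod networking needs bridged traffic to traverse iptables and IPv4 forwarding turned on; the sysctl probe above confirms the former is already 1, and the echo forces the latter. A runtime-plus-persistent version of both settings (the sysctl.d path is conventional, not from the log, and the bridge key assumes br_netfilter is loaded):

	#!/usr/bin/env bash
	set -euo pipefail
	
	# Enable immediately (assumes the br_netfilter module is loaded).
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	sudo sysctl -w net.ipv4.ip_forward=1
	
	# Persist across reboots.
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' |
	  sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null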
	I0811 23:24:57.854094   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:24:57.959516   32156 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0811 23:24:57.977625   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:24:57.977714   32156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0811 23:24:57.996190   32156 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0811 23:24:57.997418   32156 command_runner.go:130] > [Unit]
	I0811 23:24:57.997439   32156 command_runner.go:130] > Description=Docker Application Container Engine
	I0811 23:24:57.997449   32156 command_runner.go:130] > Documentation=https://docs.docker.com
	I0811 23:24:57.997458   32156 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0811 23:24:57.997466   32156 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0811 23:24:57.997475   32156 command_runner.go:130] > StartLimitBurst=3
	I0811 23:24:57.997482   32156 command_runner.go:130] > StartLimitIntervalSec=60
	I0811 23:24:57.997491   32156 command_runner.go:130] > [Service]
	I0811 23:24:57.997497   32156 command_runner.go:130] > Type=notify
	I0811 23:24:57.997504   32156 command_runner.go:130] > Restart=on-failure
	I0811 23:24:57.997508   32156 command_runner.go:130] > Environment=NO_PROXY=192.168.39.6
	I0811 23:24:57.997516   32156 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 23:24:57.997528   32156 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 23:24:57.997542   32156 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 23:24:57.997553   32156 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0811 23:24:57.997568   32156 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 23:24:57.997581   32156 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 23:24:57.997592   32156 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 23:24:57.997603   32156 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 23:24:57.997609   32156 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 23:24:57.997615   32156 command_runner.go:130] > ExecStart=
	I0811 23:24:57.997640   32156 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0811 23:24:57.997656   32156 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 23:24:57.997668   32156 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 23:24:57.997679   32156 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 23:24:57.997689   32156 command_runner.go:130] > LimitNOFILE=infinity
	I0811 23:24:57.997697   32156 command_runner.go:130] > LimitNPROC=infinity
	I0811 23:24:57.997704   32156 command_runner.go:130] > LimitCORE=infinity
	I0811 23:24:57.997710   32156 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0811 23:24:57.997721   32156 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0811 23:24:57.997728   32156 command_runner.go:130] > TasksMax=infinity
	I0811 23:24:57.997735   32156 command_runner.go:130] > TimeoutStartSec=0
	I0811 23:24:57.997750   32156 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 23:24:57.997759   32156 command_runner.go:130] > Delegate=yes
	I0811 23:24:57.997769   32156 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0811 23:24:57.997779   32156 command_runner.go:130] > KillMode=process
	I0811 23:24:57.997789   32156 command_runner.go:130] > [Install]
	I0811 23:24:57.997799   32156 command_runner.go:130] > WantedBy=multi-user.target
	I0811 23:24:57.997935   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:24:58.012870   32156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:24:58.036552   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:24:58.048720   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:24:58.061194   32156 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0811 23:24:58.091338   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:24:58.104438   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:24:58.122668   32156 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0811 23:24:58.122748   32156 ssh_runner.go:195] Run: which cri-dockerd
	I0811 23:24:58.126711   32156 command_runner.go:130] > /usr/bin/cri-dockerd
	I0811 23:24:58.126833   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0811 23:24:58.135972   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0811 23:24:58.151600   32156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0811 23:24:58.254570   32156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0811 23:24:58.362147   32156 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0811 23:24:58.362179   32156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0811 23:24:58.378397   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:24:58.481314   32156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0811 23:24:59.925856   32156 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.444492914s)
	I0811 23:24:59.925935   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:00.032330   32156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0811 23:25:00.140195   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:00.242439   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:00.345593   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0811 23:25:00.361867   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:00.471574   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
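Docker 24 exposes no CRI of its own, so the kubelet talks to it through cri-dockerd; the systemctl sequence above brings up docker first, then the cri-docker socket and service. Condensed into the essential ordering:

	#!/usr/bin/env bash
	set -euo pipefail
	
	# Bring docker up first, then the CRI shim the kubelet will dial.
	sudo systemctl daemon-reload
	sudo systemctl restart docker
	sudo systemctl enable --now cri-docker.socket
	sudo systemctl restart cri-docker
	
	# kubelet's runtime endpoint, per /etc/crictl.yaml in the log:
	#   unix:///var/run/cri-dockerd.sock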
	I0811 23:25:00.551023   32156 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0811 23:25:00.551086   32156 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0811 23:25:00.556986   32156 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0811 23:25:00.557008   32156 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0811 23:25:00.557017   32156 command_runner.go:130] > Device: 16h/22d	Inode: 853         Links: 1
	I0811 23:25:00.557028   32156 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0811 23:25:00.557043   32156 command_runner.go:130] > Access: 2023-08-11 23:25:00.435549902 +0000
	I0811 23:25:00.557050   32156 command_runner.go:130] > Modify: 2023-08-11 23:25:00.435549902 +0000
	I0811 23:25:00.557056   32156 command_runner.go:130] > Change: 2023-08-11 23:25:00.437549902 +0000
	I0811 23:25:00.557060   32156 command_runner.go:130] >  Birth: -
	I0811 23:25:00.557116   32156 start.go:534] Will wait 60s for crictl version
	I0811 23:25:00.557156   32156 ssh_runner.go:195] Run: which crictl
	I0811 23:25:00.560727   32156 command_runner.go:130] > /usr/bin/crictl
	I0811 23:25:00.560790   32156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:25:00.604460   32156 command_runner.go:130] > Version:  0.1.0
	I0811 23:25:00.604493   32156 command_runner.go:130] > RuntimeName:  docker
	I0811 23:25:00.604498   32156 command_runner.go:130] > RuntimeVersion:  24.0.4
	I0811 23:25:00.604504   32156 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0811 23:25:00.605908   32156 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0811 23:25:00.605970   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:25:00.634171   32156 command_runner.go:130] > 24.0.4
	I0811 23:25:00.635418   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:25:00.662568   32156 command_runner.go:130] > 24.0.4
	I0811 23:25:00.665312   32156 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0811 23:25:00.667164   32156 out.go:177]   - env NO_PROXY=192.168.39.6
	I0811 23:25:00.669019   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:25:00.671807   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:25:00.672171   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:25:00.672206   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:25:00.672386   32156 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0811 23:25:00.676532   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
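Note the update technique: rather than sed -i (which replaces /etc/hosts with a new inode and so breaks on bind-mounted hosts files), the filtered content plus the fresh line is written to a temp file and then cp'd back over the original. Generalized from the logged command:

	#!/usr/bin/env bash
	set -euo pipefail
	
	ENTRY=$'192.168.39.1\thost.minikube.internal'
	
	# Drop any stale host.minikube.internal line, append the fresh one,
	# then cp over /etc/hosts so the inode (and any bind mount) survives.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts
	rm -f "/tmp/h.$$"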
	I0811 23:25:00.689316   32156 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164 for IP: 192.168.39.254
	I0811 23:25:00.689349   32156 certs.go:190] acquiring lock for shared ca certs: {Name:mke12ed30faa4458f68c7f1069767b7834c8a1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:25:00.689497   32156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key
	I0811 23:25:00.689540   32156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key
	I0811 23:25:00.689554   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:25:00.689568   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:25:00.689580   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:25:00.689590   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 23:25:00.689644   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem (1338 bytes)
	W0811 23:25:00.689670   32156 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836_empty.pem, impossibly tiny 0 bytes
	I0811 23:25:00.689681   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem (1679 bytes)
	I0811 23:25:00.689703   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem (1078 bytes)
	I0811 23:25:00.689725   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:25:00.689747   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem (1675 bytes)
	I0811 23:25:00.689789   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:25:00.689811   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.689823   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem -> /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.689836   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.690135   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:25:00.715861   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 23:25:00.738775   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:25:00.761747   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0811 23:25:00.784796   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:25:00.807516   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem --> /usr/share/ca-certificates/16836.pem (1338 bytes)
	I0811 23:25:00.830089   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /usr/share/ca-certificates/168362.pem (1708 bytes)
	I0811 23:25:00.853967   32156 ssh_runner.go:195] Run: openssl version
	I0811 23:25:00.859517   32156 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0811 23:25:00.859584   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168362.pem && ln -fs /usr/share/ca-certificates/168362.pem /etc/ssl/certs/168362.pem"
	I0811 23:25:00.869741   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.874542   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.874614   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.874666   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.880067   32156 command_runner.go:130] > 3ec20f2e
	I0811 23:25:00.880258   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168362.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 23:25:00.890270   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:25:00.900196   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.904853   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.904882   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.904918   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.910034   32156 command_runner.go:130] > b5213941
	I0811 23:25:00.910094   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:25:00.919484   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16836.pem && ln -fs /usr/share/ca-certificates/16836.pem /etc/ssl/certs/16836.pem"
	I0811 23:25:00.930148   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.934711   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.934842   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.934888   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.940182   32156 command_runner.go:130] > 51391683
	I0811 23:25:00.940487   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16836.pem /etc/ssl/certs/51391683.0"
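Each extra CA is made visible to the trust store the OpenSSL way: compute the certificate's subject hash and symlink <hash>.0 in /etc/ssl/certs to the PEM, exactly as the three openssl/ln pairs above do for 168362.pem, minikubeCA.pem and 16836.pem. The same logic as a loop:

	#!/usr/bin/env bash
	set -euo pipefail
	
	# Link every PEM under /usr/share/ca-certificates into the OpenSSL
	# hash layout (<subject-hash>.0) that trust-store lookups expect.
	for pem in /usr/share/ca-certificates/*.pem; do
	    hash=$(openssl x509 -hash -noout -in "$pem")
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	done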
	I0811 23:25:00.950772   32156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:25:00.954768   32156 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:25:00.954803   32156 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:25:00.954880   32156 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0811 23:25:00.981976   32156 command_runner.go:130] > cgroupfs
	I0811 23:25:00.982131   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:25:00.982144   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:25:00.982158   32156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:25:00.982192   32156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-618164 NodeName:multinode-618164-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:25:00.982394   32156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-618164-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 23:25:00.982484   32156 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-618164-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
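Everything from kubeadm.go:181 down to the kubelet unit above is rendered from the cluster config echoed at start.go:301. As a hedged aside: kubeadm v1.26+ can statically check such a multi-document file before it is ever applied (the path below is illustrative, not from the log):

	#!/usr/bin/env bash
	set -euo pipefail
	
	CFG=/var/tmp/minikube/kubeadm.yaml   # illustrative path
	
	# Static validation of all documents in the file (kubeadm >= 1.26).
	kubeadm config validate --config "$CFG"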
	I0811 23:25:00.982596   32156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:25:00.992566   32156 command_runner.go:130] > kubeadm
	I0811 23:25:00.992586   32156 command_runner.go:130] > kubectl
	I0811 23:25:00.992592   32156 command_runner.go:130] > kubelet
	I0811 23:25:00.992611   32156 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:25:00.992666   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0811 23:25:01.004031   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0811 23:25:01.021956   32156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 23:25:01.039960   32156 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0811 23:25:01.044057   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:25:01.056054   32156 host.go:66] Checking if "multinode-618164" exists ...
	I0811 23:25:01.056422   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:25:01.056496   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:25:01.056530   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:25:01.071673   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0811 23:25:01.072083   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:25:01.072625   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:25:01.072644   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:25:01.072958   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:25:01.073142   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:25:01.073338   32156 start.go:301] JoinCluster: &{Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:25:01.073461   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0811 23:25:01.073476   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:25:01.076278   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.076678   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:25:01.076709   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.076873   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:25:01.077028   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:25:01.077195   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:25:01.077360   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:25:01.243353   32156 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sdlilv.uc4mjftwwn2c18uw --discovery-token-ca-cert-hash sha256:bf28045c66954787868571c8676d98e04ae92922baabe0a4e5f5bbb1aa371548 
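
The line above is the output of the `kubeadm token create --print-join-command --ttl=0` run: a complete, non-expiring join command that the restart flow reuses for every worker it re-adds. A minimal Go sketch of that step, using a plain local exec for brevity (in minikube this runs over SSH via ssh_runner; the binary path is copied from the log, not from minikube's source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// joinCommand regenerates a kubeadm join command on the control plane.
// --ttl=0 makes the bootstrap token non-expiring, matching the log above.
func joinCommand() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0`).Output()
	if err != nil {
		return "", fmt.Errorf("kubeadm token create: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	cmd, err := joinCommand()
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // "kubeadm join control-plane.minikube.internal:8443 --token ..."
}
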
	I0811 23:25:01.244944   32156 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0811 23:25:01.244979   32156 host.go:66] Checking if "multinode-618164" exists ...
	I0811 23:25:01.245257   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:25:01.245280   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:25:01.259938   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0811 23:25:01.260403   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:25:01.260864   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:25:01.260883   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:25:01.261300   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:25:01.261473   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:25:01.261680   32156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-618164-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0811 23:25:01.261703   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:25:01.264761   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.265249   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:25:01.265275   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.265472   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:25:01.265680   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:25:01.265814   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:25:01.265963   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:25:01.461009   32156 command_runner.go:130] > node/multinode-618164-m02 cordoned
	I0811 23:25:04.501061   32156 command_runner.go:130] > pod "busybox-67b7f59bb-vrdpw" has DeletionTimestamp older than 1 seconds, skipping
	I0811 23:25:04.501221   32156 command_runner.go:130] > node/multinode-618164-m02 drained
	I0811 23:25:04.503146   32156 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0811 23:25:04.503164   32156 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-m2c5t, kube-system/kube-proxy-9ldtq
	I0811 23:25:04.503189   32156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-618164-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.241483996s)
	I0811 23:25:04.503211   32156 node.go:108] successfully drained node "m02"
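
The drain that just completed uses a deliberately aggressive flag set so a stale worker cannot stall the restart. Note that --delete-local-data is deprecated in favor of --delete-emptydir-data (hence the warning above); the command passes both, which is harmless but redundant. A sketch of the equivalent invocation with each flag annotated (paths and node name copied from the log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.27.4/kubectl", "drain", "multinode-618164-m02",
		"--force",                          // remove pods with no managing controller
		"--grace-period=1",                 // give pods one second to exit
		"--skip-wait-for-delete-timeout=1", // don't wait on pods already terminating
		"--disable-eviction",               // delete directly, bypassing the eviction API
		"--ignore-daemonsets",              // kindnet/kube-proxy are DaemonSet-managed
		"--delete-emptydir-data",           // the non-deprecated spelling
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
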
	I0811 23:25:04.503539   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:25:04.503745   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:04.504023   32156 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0811 23:25:04.504062   32156 round_trippers.go:463] DELETE https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:04.504074   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:04.504081   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:04.504087   32156 round_trippers.go:473]     Content-Type: application/json
	I0811 23:25:04.504093   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:04.509720   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:25:04.509746   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:04.509756   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:04.509762   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:04.509768   32156 round_trippers.go:580]     Content-Length: 171
	I0811 23:25:04.509773   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:04 GMT
	I0811 23:25:04.509779   32156 round_trippers.go:580]     Audit-Id: 9e43768e-f498-44cb-89dc-762be69ad47a
	I0811 23:25:04.509784   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:04.509792   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:04.509823   32156 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-618164-m02","kind":"nodes","uid":"5117de97-d432-4fe0-baad-4ef71b0a5470"}}
	I0811 23:25:04.509905   32156 node.go:124] successfully deleted node "m02"
	I0811 23:25:04.509931   32156 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
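
With the node drained, minikube deletes the Node object outright via a raw DELETE against the API server, as shown in the request/response above. The same step expressed with client-go, offered as an illustrative equivalent rather than minikube's own code (minikube issues the raw REST call):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the log; any admin kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17044-9593/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to the DELETE /api/v1/nodes/multinode-618164-m02 above.
	err = cs.CoreV1().Nodes().Delete(context.Background(), "multinode-618164-m02", metav1.DeleteOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node deleted")
}
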
	I0811 23:25:04.509958   32156 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0811 23:25:04.509987   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sdlilv.uc4mjftwwn2c18uw --discovery-token-ca-cert-hash sha256:bf28045c66954787868571c8676d98e04ae92922baabe0a4e5f5bbb1aa371548 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-618164-m02"
	I0811 23:25:04.624388   32156 command_runner.go:130] ! W0811 23:25:04.574034    1146 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0811 23:25:04.867134   32156 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 23:25:06.582453   32156 command_runner.go:130] > [preflight] Running pre-flight checks
	I0811 23:25:06.582482   32156 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0811 23:25:06.582495   32156 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0811 23:25:06.582511   32156 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:25:06.582522   32156 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:25:06.582531   32156 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0811 23:25:06.582542   32156 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0811 23:25:06.582559   32156 command_runner.go:130] > This node has joined the cluster:
	I0811 23:25:06.582575   32156 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0811 23:25:06.582587   32156 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0811 23:25:06.582601   32156 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0811 23:25:06.582624   32156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sdlilv.uc4mjftwwn2c18uw --discovery-token-ca-cert-hash sha256:bf28045c66954787868571c8676d98e04ae92922baabe0a4e5f5bbb1aa371548 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-618164-m02": (2.072621514s)
	I0811 23:25:06.582655   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0811 23:25:06.765410   32156 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0811 23:25:06.918249   32156 start.go:303] JoinCluster complete in 5.844904448s
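
JoinCluster thus boils down to two remote commands: the kubeadm join produced earlier, followed by enabling the kubelet unit, which clears the "[WARNING Service-Kubelet]" preflight note above. A local-exec sketch of that sequence (the real token and CA hash are elided as placeholders; --ignore-preflight-errors=all tolerates leftover state from the pre-restart membership, and --cri-socket pins the runtime to cri-dockerd):

package main

import (
	"os"
	"os/exec"
)

func main() {
	steps := []string{
		// 1. Join with the freshly minted command (token/hash elided here).
		`sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-618164-m02`,
		// 2. Persist and start the kubelet, addressing the preflight warning.
		`sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet`,
	}
	for _, step := range steps {
		cmd := exec.Command("/bin/bash", "-c", step)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
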
	I0811 23:25:06.918276   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:25:06.918282   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:25:06.918333   32156 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:25:06.924190   32156 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0811 23:25:06.924215   32156 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0811 23:25:06.924224   32156 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0811 23:25:06.924234   32156 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:25:06.924247   32156 command_runner.go:130] > Access: 2023-08-11 23:23:45.638456579 +0000
	I0811 23:25:06.924258   32156 command_runner.go:130] > Modify: 2023-08-01 03:01:17.000000000 +0000
	I0811 23:25:06.924267   32156 command_runner.go:130] > Change: 2023-08-11 23:23:43.758456579 +0000
	I0811 23:25:06.924274   32156 command_runner.go:130] >  Birth: -
	I0811 23:25:06.924647   32156 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:25:06.924671   32156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:25:06.946592   32156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:25:07.323185   32156 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:25:07.327939   32156 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:25:07.332437   32156 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0811 23:25:07.344644   32156 command_runner.go:130] > daemonset.apps/kindnet configured
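
The CNI step streams the kindnet manifest from memory to /var/tmp/minikube/cni.yaml on the control plane (the "scp memory" line above) and applies it with the cluster's own kubectl; since the objects already exist after the restart, everything reports "unchanged" or "configured". A rough golang.org/x/crypto/ssh sketch of the copy-then-apply pair (host, user, and key path copied from the log; this is an assumed illustration, not minikube's ssh_runner):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.6:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("...") // placeholder for the kindnet manifest (2438 bytes in the log)

	// Stream the manifest from memory, like minikube's "scp memory" step.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	sess.Stdin = bytes.NewReader(manifest)
	if err := sess.Run("cat > /var/tmp/minikube/cni.yaml"); err != nil {
		panic(err)
	}
	sess.Close()

	// Apply it with the cluster's own kubectl, as in the log.
	sess, err = client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	sess.Stdout = os.Stdout
	if err := sess.Run("sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml"); err != nil {
		panic(err)
	}
}
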
	I0811 23:25:07.347483   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:25:07.347741   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:07.348007   32156 round_trippers.go:463] GET https://192.168.39.6:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:25:07.348020   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.348032   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.348040   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.350938   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:07.350953   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.350960   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.350968   32156 round_trippers.go:580]     Content-Length: 291
	I0811 23:25:07.350973   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.350981   32156 round_trippers.go:580]     Audit-Id: fbeaad18-59e9-4540-831e-38b3610091fd
	I0811 23:25:07.350990   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.351004   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.351014   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.351174   32156 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31aef6c0-c84e-4384-9e6e-68f0c22e59ba","resourceVersion":"888","creationTimestamp":"2023-08-11T23:20:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 23:25:07.351269   32156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-618164" context rescaled to 1 replicas
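
Rescaling coredns to one replica goes through the deployment's scale subresource: the GET above reads the current count, and an update follows only when it differs (here spec.replicas was already 1, so none did). A client-go sketch of the same read-then-write, offered as an assumed equivalent of minikube's raw REST calls:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17044-9593/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deploys := cs.AppsV1().Deployments("kube-system")

	// Read the scale subresource (the GET .../deployments/coredns/scale above).
	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Only write back when the count actually differs.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns at 1 replica")
}
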
	I0811 23:25:07.351302   32156 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0811 23:25:07.353585   32156 out.go:177] * Verifying Kubernetes components...
	I0811 23:25:07.355080   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:07.385575   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:25:07.385774   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:07.385976   32156 node_ready.go:35] waiting up to 6m0s for node "multinode-618164-m02" to be "Ready" ...
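
Everything from here to the end of the log is this readiness wait: the same GET against /api/v1/nodes/multinode-618164-m02, repeated roughly every 500ms per the timestamps, checking for a Ready condition that never turns True within the run. A minimal client-go sketch of the loop (names and kubeconfig path taken from the log; not minikube's node_ready.go itself):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitReady polls the node object until its Ready condition is True,
// mirroring the GET loop that fills the remainder of this log.
func waitReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // interval inferred from the log timestamps
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17044-9593/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitReady(cs, "multinode-618164-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
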
	I0811 23:25:07.386027   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:07.386033   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.386041   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.386049   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.389030   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:07.389056   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.389068   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.389077   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.389087   32156 round_trippers.go:580]     Content-Length: 4030
	I0811 23:25:07.389099   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.389107   32156 round_trippers.go:580]     Audit-Id: 471c9814-2221-4b46-9879-4076ecbff85f
	I0811 23:25:07.389119   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.389131   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.389227   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"948","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I0811 23:25:07.389580   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:07.389597   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.389608   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.389623   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.392669   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:07.392690   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.392700   32156 round_trippers.go:580]     Audit-Id: 175875ca-82b9-4448-a10c-d03144ec513f
	I0811 23:25:07.392709   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.392718   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.392730   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.392742   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.392753   32156 round_trippers.go:580]     Content-Length: 4030
	I0811 23:25:07.392765   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.392810   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"948","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I0811 23:25:07.893637   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:07.893665   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.893677   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.893687   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.900751   32156 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0811 23:25:07.900780   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.900793   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.900803   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.900812   32156 round_trippers.go:580]     Content-Length: 4030
	I0811 23:25:07.900821   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.900836   32156 round_trippers.go:580]     Audit-Id: 54d102be-0404-4b70-a674-e755b192b2c4
	I0811 23:25:07.900845   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.900856   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.900943   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"948","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I0811 23:25:08.393328   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:08.393351   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:08.393359   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:08.393371   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:08.396233   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:08.396258   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:08.396269   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:08 GMT
	I0811 23:25:08.396278   32156 round_trippers.go:580]     Audit-Id: 3c7fca75-a854-46c3-ac44-79264082a673
	I0811 23:25:08.396286   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:08.396294   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:08.396302   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:08.396315   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:08.396885   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:08.893528   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:08.893559   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:08.893567   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:08.893574   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:08.896611   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:08.896637   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:08.896648   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:08.896657   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:08 GMT
	I0811 23:25:08.896666   32156 round_trippers.go:580]     Audit-Id: 6368ceab-0ac5-46ff-ab8f-49e28ded3f7e
	I0811 23:25:08.896674   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:08.896686   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:08.896693   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:08.897080   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:09.393741   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:09.393768   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:09.393776   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:09.393787   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:09.396774   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:09.396800   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:09.396810   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:09.396818   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:09.396826   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:09.396834   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:09.396842   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:09 GMT
	I0811 23:25:09.396849   32156 round_trippers.go:580]     Audit-Id: 798dcfa2-7eb3-45a1-bc0d-b59597c0b9db
	I0811 23:25:09.397106   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:09.397422   32156 node_ready.go:58] node "multinode-618164-m02" has status "Ready":"False"
	I0811 23:25:09.893830   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:09.893853   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:09.893861   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:09.893868   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:09.896883   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:09.896901   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:09.896911   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:09.896921   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:09.896929   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:09.896938   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:09 GMT
	I0811 23:25:09.896948   32156 round_trippers.go:580]     Audit-Id: 3fdae358-13e1-4e53-834a-dcfd607e9e61
	I0811 23:25:09.896956   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:09.897089   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:10.393805   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:10.393827   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:10.393835   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:10.393841   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:10.397254   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:10.397275   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:10.397282   32156 round_trippers.go:580]     Audit-Id: 8aa3aaf2-0bac-4908-b0d7-58d75470f4a8
	I0811 23:25:10.397288   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:10.397293   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:10.397335   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:10.397370   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:10.397380   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:10 GMT
	I0811 23:25:10.397480   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:10.894138   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:10.894166   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:10.894179   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:10.894189   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:10.896893   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:10.896913   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:10.896919   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:10.896925   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:10.896930   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:10.896936   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:10.896941   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:10 GMT
	I0811 23:25:10.896947   32156 round_trippers.go:580]     Audit-Id: a0318b3e-e9c5-4a9c-8e51-8bda17281db1
	I0811 23:25:10.897076   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:11.393608   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:11.393629   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:11.393637   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:11.393649   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:11.396575   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:11.396601   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:11.396612   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:11.396622   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:11.396637   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:11.396650   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:11 GMT
	I0811 23:25:11.396662   32156 round_trippers.go:580]     Audit-Id: ace06413-538f-4e04-b1b3-2bddf01ae167
	I0811 23:25:11.396679   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:11.396841   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:11.893455   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:11.893477   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:11.893486   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:11.893492   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:11.896342   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:11.896363   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:11.896372   32156 round_trippers.go:580]     Audit-Id: b6d50b57-164f-4241-bdb3-7ba59d31e439
	I0811 23:25:11.896381   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:11.896391   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:11.896400   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:11.896410   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:11.896415   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:11 GMT
	I0811 23:25:11.896962   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:11.897198   32156 node_ready.go:58] node "multinode-618164-m02" has status "Ready":"False"
	I0811 23:25:12.393623   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:12.393645   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:12.393653   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:12.393659   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:12.396444   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:12.396467   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:12.396475   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:12.396481   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:12 GMT
	I0811 23:25:12.396490   32156 round_trippers.go:580]     Audit-Id: 3ee8a92d-c349-413f-8555-0c8345e4cf6a
	I0811 23:25:12.396499   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:12.396513   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:12.396521   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:12.396804   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:12.893775   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:12.893803   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:12.893817   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:12.893826   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:12.896956   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:12.896974   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:12.896981   32156 round_trippers.go:580]     Audit-Id: 710d49be-f667-4e15-845b-f361c8c33534
	I0811 23:25:12.896986   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:12.896992   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:12.896997   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:12.897002   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:12.897008   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:12 GMT
	I0811 23:25:12.897278   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:13.393997   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:13.394018   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:13.394026   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:13.394035   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:13.396987   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:13.397014   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:13.397023   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:13.397031   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:13 GMT
	I0811 23:25:13.397039   32156 round_trippers.go:580]     Audit-Id: c61542df-08dc-4262-bc1b-77d08e94ea5d
	I0811 23:25:13.397046   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:13.397053   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:13.397062   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:13.397424   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:13.894143   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:13.894170   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:13.894180   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:13.894189   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:13.897035   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:13.897059   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:13.897067   32156 round_trippers.go:580]     Audit-Id: c9cbd9cd-0056-470f-8a2d-d1dd19a1ae34
	I0811 23:25:13.897072   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:13.897078   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:13.897083   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:13.897088   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:13.897100   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:13 GMT
	I0811 23:25:13.897502   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:13.897746   32156 node_ready.go:58] node "multinode-618164-m02" has status "Ready":"False"
	I0811 23:25:14.394274   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:14.394294   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:14.394308   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:14.394317   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:14.397448   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:14.397469   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:14.397476   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:14.397482   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:14.397489   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:14 GMT
	I0811 23:25:14.397498   32156 round_trippers.go:580]     Audit-Id: f99bf46e-985a-4acf-ad1f-08c3f416c36b
	I0811 23:25:14.397507   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:14.397519   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:14.397603   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:14.894154   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:14.894177   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:14.894185   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:14.894192   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:14.897012   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:14.897037   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:14.897045   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:14.897051   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:14.897057   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:14.897062   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:14 GMT
	I0811 23:25:14.897068   32156 round_trippers.go:580]     Audit-Id: 6cc2af9e-9735-4ac7-b1b2-5ad0ab264d78
	I0811 23:25:14.897076   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:14.897325   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:15.393575   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:15.393597   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.393605   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.393611   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.396566   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.396590   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.396604   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.396614   32156 round_trippers.go:580]     Audit-Id: 4ff194c2-2cb4-4264-bca0-93526a661c22
	I0811 23:25:15.396620   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.396626   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.396631   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.396636   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.396981   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:15.893712   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:15.893743   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.893755   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.893765   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.896571   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.896597   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.896608   32156 round_trippers.go:580]     Audit-Id: 41731806-499a-49ba-9460-e35fd2480c15
	I0811 23:25:15.896617   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.896627   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.896634   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.896643   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.896651   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.896825   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"978","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3373 chars]
	I0811 23:25:15.897161   32156 node_ready.go:49] node "multinode-618164-m02" has status "Ready":"True"
	I0811 23:25:15.897183   32156 node_ready.go:38] duration metric: took 8.511193902s waiting for node "multinode-618164-m02" to be "Ready" ...
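Up to this point the restart has been polling GET /api/v1/nodes/multinode-618164-m02 roughly every 500ms until the NodeReady condition flipped from "False" to "True". A minimal sketch of that wait loop, assuming client-go; waitNodeReady and the interval are illustrative, not minikube's actual node_ready.go:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the node_ready.go messages in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}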
	I0811 23:25:15.897195   32156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:15.897275   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:15.897293   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.897303   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.897315   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.901284   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.901302   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.901311   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.901320   32156 round_trippers.go:580]     Audit-Id: 7a3587ac-9b0b-4554-b6d0-11eee67a8dad
	I0811 23:25:15.901329   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.901343   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.901356   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.901373   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.903176   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"978"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83751 chars]
	I0811 23:25:15.905648   32156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.905706   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:25:15.905714   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.905726   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.905734   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.908956   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.908972   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.908981   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.908990   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.909001   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.909015   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.909024   32156 round_trippers.go:580]     Audit-Id: d6a7308f-652e-4eaa-b3e5-6386e018f45d
	I0811 23:25:15.909037   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.909771   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0811 23:25:15.910175   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.910188   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.910198   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.910207   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.912761   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.912780   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.912791   32156 round_trippers.go:580]     Audit-Id: 4f1b80c5-dd9e-4d0d-8481-f6d57d3ac4f5
	I0811 23:25:15.912799   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.912806   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.912814   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.912823   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.912831   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.912947   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.913206   32156 pod_ready.go:92] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.913221   32156 pod_ready.go:81] duration metric: took 7.555485ms waiting for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
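The per-pod checks that follow all reduce to the same condition scan: a pod counts as "Ready" when its PodReady condition reports True. A sketch of that test, with an illustrative helper name:

package sketch

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, the check
// behind each pod_ready.go:92 line in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}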
	I0811 23:25:15.913228   32156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.913270   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-618164
	I0811 23:25:15.913278   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.913284   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.913290   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.918898   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:25:15.918913   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.918920   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.918926   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.918931   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.918944   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.918954   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.918962   32156 round_trippers.go:580]     Audit-Id: 7b7b060e-aed5-4a41-9fab-fe7573a0071d
	I0811 23:25:15.919556   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"868","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I0811 23:25:15.919914   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.919920   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.919927   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.919933   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.922982   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.922996   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.923003   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.923008   32156 round_trippers.go:580]     Audit-Id: c812d25f-86f6-46ee-8102-24276fa1d562
	I0811 23:25:15.923016   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.923025   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.923040   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.923050   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.923414   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.923675   32156 pod_ready.go:92] pod "etcd-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.923687   32156 pod_ready.go:81] duration metric: took 10.454198ms waiting for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.923702   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.923739   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-618164
	I0811 23:25:15.923746   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.923753   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.923764   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.925913   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.925927   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.925934   32156 round_trippers.go:580]     Audit-Id: 1d363306-d32e-4b72-8ad0-bc0fe96b8f6b
	I0811 23:25:15.925939   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.925945   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.925953   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.925962   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.925971   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.926233   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-618164","namespace":"kube-system","uid":"a1145d9b-2c2a-42b1-bbe6-142472dc9d01","resourceVersion":"870","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.6:8443","kubernetes.io/config.hash":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.mirror":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.seen":"2023-08-11T23:20:15.427440318Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7597 chars]
	I0811 23:25:15.926575   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.926584   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.926591   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.926596   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.928579   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:25:15.928597   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.928606   32156 round_trippers.go:580]     Audit-Id: be77d699-874e-43b4-8864-aacf648c5177
	I0811 23:25:15.928617   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.928625   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.928637   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.928650   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.928661   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.928828   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.929170   32156 pod_ready.go:92] pod "kube-apiserver-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.929187   32156 pod_ready.go:81] duration metric: took 5.480071ms waiting for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.929195   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.929232   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-618164
	I0811 23:25:15.929240   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.929247   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.929253   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.931219   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:25:15.931233   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.931240   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.931249   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.931255   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.931261   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.931266   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.931274   32156 round_trippers.go:580]     Audit-Id: 75b0e306-c810-44fb-8093-c26601b86a5d
	I0811 23:25:15.931407   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-618164","namespace":"kube-system","uid":"41f34044-7115-493f-94d8-53f69fd37242","resourceVersion":"848","creationTimestamp":"2023-08-11T23:20:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.mirror":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.seen":"2023-08-11T23:20:06.002920339Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7170 chars]
	I0811 23:25:15.932122   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.932143   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.932153   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.932163   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.935309   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.935330   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.935339   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.935348   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.935357   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.935366   32156 round_trippers.go:580]     Audit-Id: 3c5019fc-a508-43fc-97d5-67ed618ae270
	I0811 23:25:15.935380   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.935391   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.935469   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.935726   32156 pod_ready.go:92] pod "kube-controller-manager-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.935738   32156 pod_ready.go:81] duration metric: took 6.537435ms waiting for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.935746   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.093703   32156 request.go:628] Waited for 157.871057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:25:16.093764   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:25:16.093769   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.093776   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.093783   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.096959   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.096985   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.096993   32156 round_trippers.go:580]     Audit-Id: d88fbbe6-3d4b-4920-ab36-983844986cd9
	I0811 23:25:16.096999   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.097004   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.097011   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.097017   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.097023   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.097293   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9ldtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff783df6-3af7-44cf-bc60-843db8420efa","resourceVersion":"954","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
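The "Waited for ... due to client-side throttling, not priority and fairness" lines that start here are emitted by client-go itself: its token-bucket rate limiter, configured via QPS and Burst on the rest.Config, delays requests before they ever reach the server's API Priority and Fairness machinery. A sketch of where that knob lives; the QPS and Burst values are illustrative, not minikube's settings:

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose requests are paced by
// client-go's client-side token bucket.
func newThrottledClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained requests/second before calls are delayed
	cfg.Burst = 10 // short bursts allowed above QPS
	return kubernetes.NewForConfig(cfg)
}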
	I0811 23:25:16.294014   32156 request.go:628] Waited for 196.32982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:16.294066   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:16.294071   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.294090   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.294096   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.297532   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.297551   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.297558   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.297564   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.297569   32156 round_trippers.go:580]     Audit-Id: e8c12ee1-02d9-4995-83cb-640bc1424a46
	I0811 23:25:16.297574   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.297582   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.297591   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.297744   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"978","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3373 chars]
	I0811 23:25:16.297982   32156 pod_ready.go:92] pod "kube-proxy-9ldtq" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:16.297994   32156 pod_ready.go:81] duration metric: took 362.24345ms waiting for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.298004   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.494415   32156 request.go:628] Waited for 196.355018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:25:16.494491   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:25:16.494501   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.494512   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.494531   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.497665   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.497684   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.497694   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.497704   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.497723   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.497733   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.497745   32156 round_trippers.go:580]     Audit-Id: ade132a3-b98c-4d7e-9232-60bf828aada0
	I0811 23:25:16.497751   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.497897   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-glw45","generateName":"kube-proxy-","namespace":"kube-system","uid":"4616f16f-9566-447c-90cd-8e37c18508e3","resourceVersion":"843","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I0811 23:25:16.693749   32156 request.go:628] Waited for 195.321196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:16.693801   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:16.693808   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.693820   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.693830   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.696777   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:16.696813   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.696824   32156 round_trippers.go:580]     Audit-Id: 182cef79-e93b-48c1-8920-ab0da0b7ca2b
	I0811 23:25:16.696830   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.696836   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.696841   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.696847   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.696853   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.697091   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:16.697449   32156 pod_ready.go:92] pod "kube-proxy-glw45" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:16.697464   32156 pod_ready.go:81] duration metric: took 399.4554ms waiting for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.697474   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.893847   32156 request.go:628] Waited for 196.313905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:25:16.893915   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:25:16.893920   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.893928   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.893937   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.897219   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.897239   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.897245   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.897251   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.897257   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.897262   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.897268   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.897273   32156 round_trippers.go:580]     Audit-Id: 3fdd3b69-b7f8-48a8-8bea-5b227d3cc66e
	I0811 23:25:16.897458   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pv5p5","generateName":"kube-proxy-","namespace":"kube-system","uid":"08e6223f-0c5c-47bd-b37d-67f279f4d4be","resourceVersion":"961","creationTimestamp":"2023-08-11T23:22:07Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5971 chars]
	I0811 23:25:17.093891   32156 request.go:628] Waited for 196.00394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:25:17.093947   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:25:17.093965   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.093977   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.093987   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.096456   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:17.096474   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.096480   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.096486   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.096491   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.096497   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.096502   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.096508   32156 round_trippers.go:580]     Audit-Id: 8a9732b4-c5f9-40d3-b23c-0edf85a0fe77
	I0811 23:25:17.096733   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m03","uid":"84060722-cb59-478c-9b01-7517a6ae9f59","resourceVersion":"958","creationTimestamp":"2023-08-11T23:22:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3742 chars]
	I0811 23:25:17.097036   32156 pod_ready.go:97] node "multinode-618164-m03" hosting pod "kube-proxy-pv5p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164-m03" has status "Ready":"Unknown"
	I0811 23:25:17.097054   32156 pod_ready.go:81] duration metric: took 399.575386ms waiting for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	E0811 23:25:17.097062   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164-m03" hosting pod "kube-proxy-pv5p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164-m03" has status "Ready":"Unknown"
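kube-proxy-pv5p5 is scheduled on multinode-618164-m03, which has not been restarted yet, so that node's Ready condition is still "Unknown" and the wait is skipped rather than burned against the 6m0s budget. The gating check amounts to the sketch below (helper name illustrative):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostNodeReady errors out when the node a pod is scheduled to is not Ready,
// letting callers skip waits on pods that cannot become Ready yet.
func hostNodeReady(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
			return fmt.Errorf("node %q hosting pod %q is currently not \"Ready\": status %q",
				node.Name, pod.Name, cond.Status)
		}
	}
	return nil
}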
	I0811 23:25:17.097070   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:17.294578   32156 request.go:628] Waited for 197.422569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:25:17.294643   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:25:17.294650   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.294662   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.294673   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.297453   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:17.297471   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.297478   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.297483   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.297489   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.297494   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.297499   32156 round_trippers.go:580]     Audit-Id: bafeeaaf-2d19-41ba-b27d-364971a80a8f
	I0811 23:25:17.297505   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.297698   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-618164","namespace":"kube-system","uid":"b2a96d9a-e022-4abd-b8c6-e6ec3102773f","resourceVersion":"871","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.mirror":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.seen":"2023-08-11T23:20:15.427437689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I0811 23:25:17.494500   32156 request.go:628] Waited for 196.35493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:17.494562   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:17.494570   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.494582   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.494591   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.497629   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:17.497648   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.497655   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.497661   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.497670   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.497679   32156 round_trippers.go:580]     Audit-Id: b379217e-d58d-4a8e-83af-37d0faef58c0
	I0811 23:25:17.497688   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.497708   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.497865   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:17.498206   32156 pod_ready.go:92] pod "kube-scheduler-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:17.498221   32156 pod_ready.go:81] duration metric: took 401.140427ms waiting for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:17.498231   32156 pod_ready.go:38] duration metric: took 1.601020252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:17.498248   32156 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:25:17.498294   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:17.511290   32156 system_svc.go:56] duration metric: took 13.036483ms WaitForService to wait for kubelet.
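The kubelet probe above is purely an exit-code check: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. minikube runs the command over SSH inside the VM; a local-shell equivalent of the same probe:

package sketch

import "os/exec"

// kubeletActive mirrors the probe in the log: with --quiet the command's
// exit code alone says whether the kubelet unit is running.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}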
	I0811 23:25:17.511311   32156 kubeadm.go:581] duration metric: took 10.15994815s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:25:17.511333   32156 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:25:17.693680   32156 request.go:628] Waited for 182.289745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0811 23:25:17.693729   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0811 23:25:17.693735   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.693744   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.693751   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.697024   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:17.697044   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.697053   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.697061   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.697069   32156 round_trippers.go:580]     Audit-Id: 2ed040bb-4cad-4d2a-bc04-d0e4a9280573
	I0811 23:25:17.697077   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.697085   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.697091   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.697665   32156 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"981"},"items":[{"metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14307 chars]
	I0811 23:25:17.698209   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:25:17.698224   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:17.698234   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:25:17.698237   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:17.698244   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:25:17.698247   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:17.698252   32156 node_conditions.go:105] duration metric: took 186.915638ms to run NodePressure ...
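The NodePressure pass lists all three nodes in one request and reads capacity straight from each node's status, which is where the repeated storage/cpu pairs come from. A sketch under the same client-go assumption (helper name illustrative):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities lists nodes and prints the same two capacity figures
// the log reports for each of the three machines.
func printNodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
	return nil
}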
	I0811 23:25:17.698263   32156 start.go:228] waiting for startup goroutines ...
	I0811 23:25:17.698287   32156 start.go:242] writing updated cluster config ...
	I0811 23:25:17.698695   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:25:17.698823   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:25:17.702527   32156 out.go:177] * Starting worker node multinode-618164-m03 in cluster multinode-618164
	I0811 23:25:17.703970   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:25:17.703991   32156 cache.go:57] Caching tarball of preloaded images
	I0811 23:25:17.704072   32156 preload.go:174] Found /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0811 23:25:17.704083   32156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0811 23:25:17.704186   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:25:17.704337   32156 start.go:365] acquiring machines lock for multinode-618164-m03: {Name:mk5e6cee1d1e9195cd61b1fff8d9384d7220567d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0811 23:25:17.704376   32156 start.go:369] acquired machines lock for "multinode-618164-m03" in 20.954µs
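
The machines lock serializes host-side operations per machine name; the spec printed above (Delay:500ms, Timeout:13m0s, Cancel) is the shape of a poll-until-deadline acquire. A rough file-lock equivalent, purely illustrative and not minikube's actual locking package:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/sys/unix"
    )

    // acquire polls a file lock until it succeeds or the deadline passes,
    // mirroring the Delay/Timeout behaviour shown in the log.
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err == nil {
                return f, nil
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        f, err := acquire("/tmp/multinode-618164-m03.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        fmt.Println("lock held:", f.Name())
    }
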
	I0811 23:25:17.704389   32156 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:25:17.704393   32156 fix.go:54] fixHost starting: m03
	I0811 23:25:17.704629   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:25:17.704660   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:25:17.719031   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I0811 23:25:17.719507   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:25:17.719966   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:25:17.719988   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:25:17.720350   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:25:17.720543   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:17.720707   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetState
	I0811 23:25:17.722326   32156 fix.go:102] recreateIfNeeded on multinode-618164-m03: state=Stopped err=<nil>
	I0811 23:25:17.724458   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	W0811 23:25:17.724641   32156 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:25:17.726332   32156 out.go:177] * Restarting existing kvm2 VM for "multinode-618164-m03" ...
	I0811 23:25:17.728121   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .Start
	I0811 23:25:17.728331   32156 main.go:141] libmachine: (multinode-618164-m03) Ensuring networks are active...
	I0811 23:25:17.729124   32156 main.go:141] libmachine: (multinode-618164-m03) Ensuring network default is active
	I0811 23:25:17.729469   32156 main.go:141] libmachine: (multinode-618164-m03) Ensuring network mk-multinode-618164 is active
	I0811 23:25:17.729812   32156 main.go:141] libmachine: (multinode-618164-m03) Getting domain xml...
	I0811 23:25:17.730556   32156 main.go:141] libmachine: (multinode-618164-m03) Creating domain...
	I0811 23:25:18.972672   32156 main.go:141] libmachine: (multinode-618164-m03) Waiting to get IP...
	I0811 23:25:18.973569   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:18.973976   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:18.974087   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:18.973983   32576 retry.go:31] will retry after 247.15448ms: waiting for machine to come up
	I0811 23:25:19.222450   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:19.223012   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:19.223045   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:19.222958   32576 retry.go:31] will retry after 320.207163ms: waiting for machine to come up
	I0811 23:25:19.545416   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:19.545806   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:19.545833   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:19.545772   32576 retry.go:31] will retry after 410.907641ms: waiting for machine to come up
	I0811 23:25:19.958311   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:19.958713   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:19.958746   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:19.958619   32576 retry.go:31] will retry after 529.355814ms: waiting for machine to come up
	I0811 23:25:20.489224   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:20.489697   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:20.489739   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:20.489659   32576 retry.go:31] will retry after 530.096222ms: waiting for machine to come up
	I0811 23:25:21.021185   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:21.021706   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:21.021729   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:21.021662   32576 retry.go:31] will retry after 792.292205ms: waiting for machine to come up
	I0811 23:25:21.815693   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:21.816071   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:21.816098   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:21.816019   32576 retry.go:31] will retry after 891.947853ms: waiting for machine to come up
	I0811 23:25:22.709969   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:22.710378   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:22.710404   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:22.710326   32576 retry.go:31] will retry after 1.186793563s: waiting for machine to come up
	I0811 23:25:23.898208   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:23.898777   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:23.898803   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:23.898711   32576 retry.go:31] will retry after 1.371024031s: waiting for machine to come up
	I0811 23:25:25.271009   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:25.271411   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:25.271434   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:25.271373   32576 retry.go:31] will retry after 2.293356428s: waiting for machine to come up
	I0811 23:25:27.566089   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:27.566561   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:27.566589   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:27.566512   32576 retry.go:31] will retry after 2.86191654s: waiting for machine to come up
	I0811 23:25:30.430526   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:30.430948   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:30.430979   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:30.430884   32576 retry.go:31] will retry after 2.696789013s: waiting for machine to come up
	I0811 23:25:33.129055   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:33.129437   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:33.129465   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:33.129382   32576 retry.go:31] will retry after 2.912914856s: waiting for machine to come up
	I0811 23:25:36.045333   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.045895   32156 main.go:141] libmachine: (multinode-618164-m03) Found IP for machine: 192.168.39.21
	I0811 23:25:36.045923   32156 main.go:141] libmachine: (multinode-618164-m03) Reserving static IP address...
	I0811 23:25:36.045950   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has current primary IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.046318   32156 main.go:141] libmachine: (multinode-618164-m03) Reserved static IP address: 192.168.39.21
	I0811 23:25:36.046343   32156 main.go:141] libmachine: (multinode-618164-m03) Waiting for SSH to be available...
	I0811 23:25:36.046365   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "multinode-618164-m03", mac: "52:54:00:f9:60:56", ip: "192.168.39.21"} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.046409   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | skip adding static IP to network mk-multinode-618164 - found existing host DHCP lease matching {name: "multinode-618164-m03", mac: "52:54:00:f9:60:56", ip: "192.168.39.21"}
	I0811 23:25:36.046443   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | Getting to WaitForSSH function...
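
The chain of will-retry lines is a jittered backoff around the DHCP-lease lookup: every miss sleeps a growing, randomized interval before asking libvirt again, until the lease for 52:54:00:f9:60:56 appears. A compact sketch of that pattern; waitForIP and its growth factor are illustrative, not the exact retry.go algorithm:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls fn with jittered, roughly geometric backoff, in the
    // spirit of the retry.go lines in the log above.
    func waitForIP(fn func() (string, error), attempts int) (string, error) {
        wait := 250 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := fn(); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(wait) / 2))
            time.Sleep(wait + jitter)
            wait = wait * 3 / 2 // grow ~1.5x per miss, as the logged waits do
        }
        return "", fmt.Errorf("no IP after %d attempts", attempts)
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, error) {
            tries++
            if tries < 3 {
                return "", fmt.Errorf("no DHCP lease yet")
            }
            return "192.168.39.21", nil
        }, 10)
        fmt.Println(ip, err)
    }
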
	I0811 23:25:36.048418   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.048737   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.048769   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.048863   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | Using SSH client type: external
	I0811 23:25:36.048913   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa (-rw-------)
	I0811 23:25:36.048946   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0811 23:25:36.048960   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | About to run SSH command:
	I0811 23:25:36.048969   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | exit 0
	I0811 23:25:36.143723   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | SSH cmd err, output: <nil>: 
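
WaitForSSH shells out to the system ssh with host-key checking disabled and key-only auth, running `exit 0` as a liveness probe (the empty output above is the success case). Approximately, with placeholder paths:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshAlive runs `exit 0` over ssh with the same kinds of options the
    // log shows: no host-key checks, key-only auth, short timeouts.
    func sshAlive(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh not ready: %v (%s)", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder key path; real runs use the profile's machine key.
        if err := sshAlive("192.168.39.21", "/path/to/id_rsa"); err != nil {
            fmt.Println(err)
        }
    }
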
	I0811 23:25:36.143998   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetConfigRaw
	I0811 23:25:36.144693   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetIP
	I0811 23:25:36.147146   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.147538   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.147572   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.147863   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:25:36.148048   32156 machine.go:88] provisioning docker machine ...
	I0811 23:25:36.148065   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:36.148332   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetMachineName
	I0811 23:25:36.148485   32156 buildroot.go:166] provisioning hostname "multinode-618164-m03"
	I0811 23:25:36.148504   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetMachineName
	I0811 23:25:36.148718   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.150817   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.151209   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.151241   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.151439   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.151635   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.151808   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.151996   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.152210   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.152601   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.152621   32156 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-618164-m03 && echo "multinode-618164-m03" | sudo tee /etc/hostname
	I0811 23:25:36.293780   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-618164-m03
	
	I0811 23:25:36.293808   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.297049   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.297512   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.297545   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.297709   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.297904   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.298085   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.298222   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.298373   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.298764   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.298787   32156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-618164-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-618164-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-618164-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:25:36.433352   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
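
The script above is an idempotent /etc/hosts edit: if no line already maps the new hostname, it rewrites an existing 127.0.1.1 entry or appends one. A sketch of templating that same remote command in Go; hostsFixup is a hypothetical helper, not minikube's:

    package main

    import "fmt"

    // hostsFixup renders the same idempotent /etc/hosts edit the log runs:
    // rewrite an existing 127.0.1.1 line, or append one if nothing matches.
    func hostsFixup(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsFixup("multinode-618164-m03"))
    }
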
	I0811 23:25:36.433379   32156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17044-9593/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-9593/.minikube}
	I0811 23:25:36.433400   32156 buildroot.go:174] setting up certificates
	I0811 23:25:36.433409   32156 provision.go:83] configureAuth start
	I0811 23:25:36.433420   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetMachineName
	I0811 23:25:36.433718   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetIP
	I0811 23:25:36.436594   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.436937   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.436971   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.437106   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.439230   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.439550   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.439579   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.439671   32156 provision.go:138] copyHostCerts
	I0811 23:25:36.439709   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:25:36.439748   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem, removing ...
	I0811 23:25:36.439760   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:25:36.439831   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem (1078 bytes)
	I0811 23:25:36.439904   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:25:36.439921   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem, removing ...
	I0811 23:25:36.439929   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:25:36.439952   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem (1123 bytes)
	I0811 23:25:36.439993   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:25:36.440008   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem, removing ...
	I0811 23:25:36.440014   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:25:36.440034   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem (1675 bytes)
	I0811 23:25:36.440096   32156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem org=jenkins.multinode-618164-m03 san=[192.168.39.21 192.168.39.21 localhost 127.0.0.1 minikube multinode-618164-m03]
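
configureAuth reissues the machine's server certificate with the SANs listed above (the VM IP, localhost, and the hostnames), signed by the profile CA. A condensed crypto/x509 sketch of that step; the throwaway CA in main stands in for the profile's real ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a cert whose SANs match the log line:
    // IP addresses plus DNS names, signed by the provided CA.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-618164-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.21"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-618164-m03"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "throwaway-ca"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, _, err := newServerCert(ca, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("server cert DER bytes:", len(der))
    }
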
	I0811 23:25:36.501259   32156 provision.go:172] copyRemoteCerts
	I0811 23:25:36.501310   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:25:36.501330   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.504009   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.504432   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.504465   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.504639   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.504800   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.504964   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.505060   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:36.596832   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:25:36.596906   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0811 23:25:36.621591   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:25:36.621657   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0811 23:25:36.644337   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:25:36.644396   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:25:36.668335   32156 provision.go:86] duration metric: configureAuth took 234.912237ms
	I0811 23:25:36.668361   32156 buildroot.go:189] setting minikube options for container-runtime
	I0811 23:25:36.668554   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:25:36.668575   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:36.668832   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.671119   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.671514   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.671579   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.671669   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.671923   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.672119   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.672335   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.672567   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.673055   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.673071   32156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 23:25:36.798036   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0811 23:25:36.798062   32156 buildroot.go:70] root file system type: tmpfs
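
Before rendering the docker unit, the provisioner probes the guest's root filesystem type; tmpfs is expected here because the buildroot ISO runs from RAM. The probe is just df over SSH, which locally reduces to:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe the log runs remotely: `df --output=fstype / | tail -n 1`.
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            panic(err)
        }
        fields := strings.Fields(strings.TrimSpace(string(out)))
        fmt.Println(fields[len(fields)-1]) // e.g. "tmpfs" on this guest
    }
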
	I0811 23:25:36.798178   32156 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 23:25:36.798200   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.801022   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.801362   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.801391   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.801528   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.801739   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.801915   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.802063   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.802210   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.802566   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.802649   32156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.6"
	Environment="NO_PROXY=192.168.39.6,192.168.39.254"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 23:25:36.940093   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.6
	Environment=NO_PROXY=192.168.39.6,192.168.39.254
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
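
Note that the rendered unit carries two Environment=NO_PROXY lines, one per pass over the discovered node IPs. systemd applies the later assignment when the same variable is set twice, so dockerd effectively sees NO_PROXY=192.168.39.6,192.168.39.254 and the duplicate is harmless. A sketch of collapsing the values before templating; mergeNoProxy is illustrative, not a minikube helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // mergeNoProxy folds successive NO_PROXY values into one comma list,
    // deduplicating entries, so the unit needs a single Environment= line.
    func mergeNoProxy(values ...string) string {
        seen := map[string]bool{}
        var out []string
        for _, v := range values {
            for _, ip := range strings.Split(v, ",") {
                if ip != "" && !seen[ip] {
                    seen[ip] = true
                    out = append(out, ip)
                }
            }
        }
        return "Environment=\"NO_PROXY=" + strings.Join(out, ",") + "\""
    }

    func main() {
        fmt.Println(mergeNoProxy("192.168.39.6", "192.168.39.6,192.168.39.254"))
    }
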
	
	I0811 23:25:36.940130   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.943126   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.943512   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.943546   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.943750   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.943935   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.944142   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.944307   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.944493   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.945117   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.945149   32156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 23:25:37.838733   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0811 23:25:37.838759   32156 machine.go:91] provisioned docker machine in 1.690697728s
	I0811 23:25:37.838769   32156 start.go:300] post-start starting for "multinode-618164-m03" (driver="kvm2")
	I0811 23:25:37.838778   32156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:25:37.838796   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:37.839181   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:25:37.839216   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:37.841673   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.842079   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:37.842111   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.842251   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:37.842440   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:37.842680   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:37.842835   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:37.937129   32156 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:25:37.941409   32156 command_runner.go:130] > NAME=Buildroot
	I0811 23:25:37.941430   32156 command_runner.go:130] > VERSION=2021.02.12-1-gb58903a-dirty
	I0811 23:25:37.941437   32156 command_runner.go:130] > ID=buildroot
	I0811 23:25:37.941445   32156 command_runner.go:130] > VERSION_ID=2021.02.12
	I0811 23:25:37.941452   32156 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0811 23:25:37.941635   32156 info.go:137] Remote host: Buildroot 2021.02.12
	I0811 23:25:37.941651   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/addons for local assets ...
	I0811 23:25:37.941708   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/files for local assets ...
	I0811 23:25:37.941797   32156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> 168362.pem in /etc/ssl/certs
	I0811 23:25:37.941809   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /etc/ssl/certs/168362.pem
	I0811 23:25:37.941890   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:25:37.951136   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:25:37.972839   32156 start.go:303] post-start completed in 134.057637ms
	I0811 23:25:37.972859   32156 fix.go:56] fixHost completed within 20.268465262s
	I0811 23:25:37.972880   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:37.975862   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.976279   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:37.976308   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.976445   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:37.976635   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:37.976789   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:37.976944   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:37.977100   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:37.977480   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:37.977491   32156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0811 23:25:38.104188   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691796338.052995938
	
	I0811 23:25:38.104213   32156 fix.go:206] guest clock: 1691796338.052995938
	I0811 23:25:38.104238   32156 fix.go:219] Guest: 2023-08-11 23:25:38.052995938 +0000 UTC Remote: 2023-08-11 23:25:37.972862052 +0000 UTC m=+125.283072685 (delta=80.133886ms)
	I0811 23:25:38.104257   32156 fix.go:190] guest clock delta is within tolerance: 80.133886ms
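
fix.go samples the guest's `date +%s.%N`, diffs it against the host clock, and only forces a resync when the skew leaves tolerance; the 80.13ms delta above passes. The comparison is essentially the following (the 2s tolerance is an assumed value, not taken from the log):

    package main

    import (
        "fmt"
        "time"
    )

    // clockOK reports whether guest/host skew is within tolerance,
    // mirroring the "guest clock delta is within tolerance" line.
    func clockOK(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1691796338, 52995938) // the logged guest clock
        host := guest.Add(-80133886 * time.Nanosecond)
        fmt.Println(clockOK(guest, host, 2*time.Second))
    }
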
	I0811 23:25:38.104262   32156 start.go:83] releasing machines lock for "multinode-618164-m03", held for 20.399878116s
	I0811 23:25:38.104279   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.104576   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetIP
	I0811 23:25:38.107197   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.107628   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:38.107650   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.109952   32156 out.go:177] * Found network options:
	I0811 23:25:38.111798   32156 out.go:177]   - NO_PROXY=192.168.39.6,192.168.39.254
	W0811 23:25:38.113479   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	W0811 23:25:38.113500   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:25:38.113513   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.114070   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.114262   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.114348   32156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:25:38.114385   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	W0811 23:25:38.114478   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	W0811 23:25:38.114501   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:25:38.114558   32156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:25:38.114573   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:38.117304   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.117690   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:38.117719   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.117744   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.117866   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:38.118061   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:38.118233   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:38.118233   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:38.118300   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.118395   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:38.118408   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:38.118555   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:38.118692   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:38.118819   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:38.214388   32156 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0811 23:25:38.214435   32156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0811 23:25:38.214539   32156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:25:38.239706   32156 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:25:38.240636   32156 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0811 23:25:38.240659   32156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:25:38.240668   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:25:38.240779   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:25:38.258654   32156 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0811 23:25:38.258755   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0811 23:25:38.268907   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0811 23:25:38.279426   32156 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0811 23:25:38.279494   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0811 23:25:38.289572   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:25:38.299314   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0811 23:25:38.309114   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:25:38.318624   32156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:25:38.328572   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0811 23:25:38.338237   32156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:25:38.346331   32156 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:25:38.346394   32156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
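
Two kernel networking prerequisites are confirmed before the runtimes restart: bridged traffic must pass through iptables (net.bridge.bridge-nf-call-iptables = 1, echoed above) and IPv4 forwarding must be enabled. Checked directly on a host, that is roughly:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        for _, p := range []string{
            "/proc/sys/net/bridge/bridge-nf-call-iptables", // needs br_netfilter loaded
            "/proc/sys/net/ipv4/ip_forward",
        } {
            b, err := os.ReadFile(p)
            if err != nil {
                fmt.Println(p, "unavailable:", err)
                continue
            }
            fmt.Println(p, "=", strings.TrimSpace(string(b))) // both should read "1"
        }
    }
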
	I0811 23:25:38.354327   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:38.457471   32156 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0811 23:25:38.476104   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:25:38.476184   32156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0811 23:25:38.495179   32156 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0811 23:25:38.495561   32156 command_runner.go:130] > [Unit]
	I0811 23:25:38.495584   32156 command_runner.go:130] > Description=Docker Application Container Engine
	I0811 23:25:38.495593   32156 command_runner.go:130] > Documentation=https://docs.docker.com
	I0811 23:25:38.495602   32156 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0811 23:25:38.495610   32156 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0811 23:25:38.495621   32156 command_runner.go:130] > StartLimitBurst=3
	I0811 23:25:38.495630   32156 command_runner.go:130] > StartLimitIntervalSec=60
	I0811 23:25:38.495636   32156 command_runner.go:130] > [Service]
	I0811 23:25:38.495646   32156 command_runner.go:130] > Type=notify
	I0811 23:25:38.495652   32156 command_runner.go:130] > Restart=on-failure
	I0811 23:25:38.495659   32156 command_runner.go:130] > Environment=NO_PROXY=192.168.39.6
	I0811 23:25:38.495676   32156 command_runner.go:130] > Environment=NO_PROXY=192.168.39.6,192.168.39.254
	I0811 23:25:38.495692   32156 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 23:25:38.495709   32156 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 23:25:38.495737   32156 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 23:25:38.495751   32156 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0811 23:25:38.495765   32156 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 23:25:38.495779   32156 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 23:25:38.495833   32156 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 23:25:38.495852   32156 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 23:25:38.495866   32156 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 23:25:38.495876   32156 command_runner.go:130] > ExecStart=
	I0811 23:25:38.495903   32156 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0811 23:25:38.495916   32156 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 23:25:38.495930   32156 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 23:25:38.495944   32156 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 23:25:38.495953   32156 command_runner.go:130] > LimitNOFILE=infinity
	I0811 23:25:38.495960   32156 command_runner.go:130] > LimitNPROC=infinity
	I0811 23:25:38.495969   32156 command_runner.go:130] > LimitCORE=infinity
	I0811 23:25:38.495978   32156 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0811 23:25:38.495989   32156 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0811 23:25:38.495999   32156 command_runner.go:130] > TasksMax=infinity
	I0811 23:25:38.496005   32156 command_runner.go:130] > TimeoutStartSec=0
	I0811 23:25:38.496018   32156 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 23:25:38.496026   32156 command_runner.go:130] > Delegate=yes
	I0811 23:25:38.496046   32156 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0811 23:25:38.496055   32156 command_runner.go:130] > KillMode=process
	I0811 23:25:38.496061   32156 command_runner.go:130] > [Install]
	I0811 23:25:38.496069   32156 command_runner.go:130] > WantedBy=multi-user.target
	I0811 23:25:38.496303   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:25:38.514306   32156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:25:38.534347   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:25:38.546429   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:25:38.557721   32156 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0811 23:25:38.591792   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
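Before handing the node to Docker and cri-dockerd, the start path probes and stops any competing runtime, as the systemctl calls above show: probe, stop, probe again, for containerd and then crio. A compact sketch of that step (simplified to the standard `systemctl is-active <unit>` form; the logged invocation runs over minikube's ssh_runner and carries an extra literal `service` token):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// isActive mirrors the probe above: with --quiet, systemctl reports
	// purely via exit status, so a nil error means the unit is running.
	func isActive(unit string) bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	// stopIfActive stops a competing container runtime so Docker and
	// cri-dockerd own the node, then re-probes to confirm it went down.
	func stopIfActive(unit string) error {
		if !isActive(unit) {
			return nil
		}
		if err := exec.Command("sudo", "systemctl", "stop", "-f", unit).Run(); err != nil {
			return err
		}
		if isActive(unit) {
			return fmt.Errorf("%s is still active after stop", unit)
		}
		return nil
	}

	func main() {
		for _, u := range []string{"containerd", "crio"} {
			if err := stopIfActive(u); err != nil {
				fmt.Println(err)
			}
		}
	}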
	I0811 23:25:38.605657   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:25:38.624649   32156 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0811 23:25:38.625095   32156 ssh_runner.go:195] Run: which cri-dockerd
	I0811 23:25:38.628675   32156 command_runner.go:130] > /usr/bin/cri-dockerd
	I0811 23:25:38.628776   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0811 23:25:38.637446   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
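With Docker chosen as the runtime, crictl and other CRI clients are pointed at the cri-dockerd socket via the /etc/crictl.yaml written above. The equivalent direct write, for illustration (minikube itself tees the file over SSH, as logged):

	package main

	import "os"

	func main() {
		// Same content as the tee above: route CRI calls to cri-dockerd
		// instead of the default containerd endpoint.
		cfg := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
		if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
			panic(err)
		}
	}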
	I0811 23:25:38.655221   32156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0811 23:25:38.757647   32156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0811 23:25:38.866252   32156 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0811 23:25:38.866289   32156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0811 23:25:38.883536   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:39.000609   32156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0811 23:25:40.459788   32156 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.45914496s)
	I0811 23:25:40.459842   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:40.571329   32156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0811 23:25:40.695944   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:40.813305   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:40.926702   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0811 23:25:40.942637   32156 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0811 23:25:40.945410   32156 out.go:177] 
	W0811 23:25:40.946925   32156 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0811 23:25:40.946942   32156 out.go:239] * 
	W0811 23:25:40.947740   32156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 23:25:40.949375   32156 out.go:177] 

                                                
                                                
** /stderr **
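The stderr tail shows the actual failure: after Docker was reconfigured and restarted, `sudo systemctl restart cri-docker.socket` exited with status 1, so minikube aborted with RUNTIME_ENABLE. The log only points at `journalctl -xe`; a sketch of the triage one would run inside the VM (via `minikube ssh`) follows. The unit states are assumptions until inspected; only standard systemctl/journalctl flags are used:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Inspect both the socket unit that failed to restart and its paired
		// service; -xeu adds explanatory text, jumps to the end of the
		// journal, and filters by unit.
		for _, args := range [][]string{
			{"systemctl", "status", "cri-docker.socket", "--no-pager"},
			{"journalctl", "-xeu", "cri-docker.socket", "--no-pager"},
			{"journalctl", "-xeu", "cri-docker.service", "--no-pager"},
		} {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("$ sudo %v (err=%v)\n%s\n", args, err, out)
		}
	}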
multinode_test.go:297: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-618164" : exit status 90
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-618164
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-618164 -n multinode-618164
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-618164 logs -n 25: (1.404408379s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3974164346/001/cp-test_multinode-618164-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164:/home/docker/cp-test_multinode-618164-m02_multinode-618164.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n multinode-618164 sudo cat                                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-618164-m02_multinode-618164.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m03:/home/docker/cp-test_multinode-618164-m02_multinode-618164-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n multinode-618164-m03 sudo cat                                   | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-618164-m02_multinode-618164-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp testdata/cp-test.txt                                                | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp multinode-618164-m03:/home/docker/cp-test.txt                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3974164346/001/cp-test_multinode-618164-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp multinode-618164-m03:/home/docker/cp-test.txt                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164:/home/docker/cp-test_multinode-618164-m03_multinode-618164.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n multinode-618164 sudo cat                                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-618164-m03_multinode-618164.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-618164 cp multinode-618164-m03:/home/docker/cp-test.txt                       | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m02:/home/docker/cp-test_multinode-618164-m03_multinode-618164-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n                                                                 | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | multinode-618164-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-618164 ssh -n multinode-618164-m02 sudo cat                                   | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-618164-m03_multinode-618164-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-618164 node stop m03                                                          | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:22 UTC |
	| node    | multinode-618164 node start                                                             | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:22 UTC | 11 Aug 23 23:23 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-618164                                                                | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:23 UTC |                     |
	| stop    | -p multinode-618164                                                                     | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:23 UTC | 11 Aug 23 23:23 UTC |
	| start   | -p multinode-618164                                                                     | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:23 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-618164                                                                | multinode-618164 | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:23:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:23:32.722173   32156 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:23:32.722281   32156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:23:32.722294   32156 out.go:309] Setting ErrFile to fd 2...
	I0811 23:23:32.722298   32156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:23:32.722512   32156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	I0811 23:23:32.723027   32156 out.go:303] Setting JSON to false
	I0811 23:23:32.723899   32156 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3967,"bootTime":1691792246,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:23:32.723952   32156 start.go:138] virtualization: kvm guest
	I0811 23:23:32.727353   32156 out.go:177] * [multinode-618164] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0811 23:23:32.729163   32156 notify.go:220] Checking for updates...
	I0811 23:23:32.729177   32156 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:23:32.730904   32156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:23:32.732568   32156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:23:32.734361   32156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:23:32.735936   32156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0811 23:23:32.737453   32156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:23:32.739333   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:23:32.739432   32156 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:23:32.739796   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:23:32.739843   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:23:32.753729   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45667
	I0811 23:23:32.754115   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:23:32.754720   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:23:32.754748   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:23:32.755035   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:23:32.755226   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:32.789495   32156 out.go:177] * Using the kvm2 driver based on existing profile
	I0811 23:23:32.791161   32156 start.go:298] selected driver: kvm2
	I0811 23:23:32.791189   32156 start.go:901] validating driver "kvm2" against &{Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:23:32.791301   32156 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:23:32.791591   32156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:23:32.791655   32156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17044-9593/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0811 23:23:32.806055   32156 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.1
	I0811 23:23:32.806713   32156 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 23:23:32.806758   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:23:32.806766   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:23:32.806777   32156 start_flags.go:319] config:
	{Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}

                                                
                                                
	I0811 23:23:32.806969   32156 iso.go:125] acquiring lock: {Name:mkbb435ea885d9d203ce0113f8005e4b53bc59ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:23:32.808986   32156 out.go:177] * Starting control plane node multinode-618164 in cluster multinode-618164
	I0811 23:23:32.810394   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:23:32.810441   32156 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4
	I0811 23:23:32.810460   32156 cache.go:57] Caching tarball of preloaded images
	I0811 23:23:32.810544   32156 preload.go:174] Found /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0811 23:23:32.810557   32156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0811 23:23:32.810731   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:23:32.810951   32156 start.go:365] acquiring machines lock for multinode-618164: {Name:mk5e6cee1d1e9195cd61b1fff8d9384d7220567d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0811 23:23:32.811005   32156 start.go:369] acquired machines lock for "multinode-618164" in 32.003µs
	I0811 23:23:32.811026   32156 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:23:32.811039   32156 fix.go:54] fixHost starting: 
	I0811 23:23:32.811341   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:23:32.811377   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:23:32.825189   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0811 23:23:32.825607   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:23:32.826297   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:23:32.826317   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:23:32.826651   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:23:32.826809   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:32.826933   32156 main.go:141] libmachine: (multinode-618164) Calling .GetState
	I0811 23:23:32.828364   32156 fix.go:102] recreateIfNeeded on multinode-618164: state=Stopped err=<nil>
	I0811 23:23:32.828391   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	W0811 23:23:32.828569   32156 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:23:32.831881   32156 out.go:177] * Restarting existing kvm2 VM for "multinode-618164" ...
	I0811 23:23:32.833637   32156 main.go:141] libmachine: (multinode-618164) Calling .Start
	I0811 23:23:32.833821   32156 main.go:141] libmachine: (multinode-618164) Ensuring networks are active...
	I0811 23:23:32.834601   32156 main.go:141] libmachine: (multinode-618164) Ensuring network default is active
	I0811 23:23:32.834951   32156 main.go:141] libmachine: (multinode-618164) Ensuring network mk-multinode-618164 is active
	I0811 23:23:32.835359   32156 main.go:141] libmachine: (multinode-618164) Getting domain xml...
	I0811 23:23:32.836112   32156 main.go:141] libmachine: (multinode-618164) Creating domain...
	I0811 23:23:34.036415   32156 main.go:141] libmachine: (multinode-618164) Waiting to get IP...
	I0811 23:23:34.037475   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.037865   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.037955   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.037865   32185 retry.go:31] will retry after 250.674646ms: waiting for machine to come up
	I0811 23:23:34.290585   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.291097   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.291139   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.291053   32185 retry.go:31] will retry after 298.664709ms: waiting for machine to come up
	I0811 23:23:34.591686   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.592087   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.592116   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.592054   32185 retry.go:31] will retry after 344.854456ms: waiting for machine to come up
	I0811 23:23:34.938436   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:34.938925   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:34.938950   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:34.938853   32185 retry.go:31] will retry after 465.356896ms: waiting for machine to come up
	I0811 23:23:35.405439   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:35.405855   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:35.405876   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:35.405839   32185 retry.go:31] will retry after 468.026827ms: waiting for machine to come up
	I0811 23:23:35.874905   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:35.875325   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:35.875355   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:35.875269   32185 retry.go:31] will retry after 688.85699ms: waiting for machine to come up
	I0811 23:23:36.566140   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:36.566553   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:36.566584   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:36.566507   32185 retry.go:31] will retry after 978.359324ms: waiting for machine to come up
	I0811 23:23:37.546660   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:37.547122   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:37.547151   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:37.547050   32185 retry.go:31] will retry after 1.294102807s: waiting for machine to come up
	I0811 23:23:38.842673   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:38.843078   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:38.843112   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:38.843031   32185 retry.go:31] will retry after 1.213055571s: waiting for machine to come up
	I0811 23:23:40.058237   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:40.058595   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:40.058619   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:40.058554   32185 retry.go:31] will retry after 1.75151759s: waiting for machine to come up
	I0811 23:23:41.812537   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:41.812837   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:41.812873   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:41.812810   32185 retry.go:31] will retry after 1.77396365s: waiting for machine to come up
	I0811 23:23:43.588031   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:43.588526   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:43.588569   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:43.588493   32185 retry.go:31] will retry after 3.271610328s: waiting for machine to come up
	I0811 23:23:46.863065   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:46.863556   32156 main.go:141] libmachine: (multinode-618164) DBG | unable to find current IP address of domain multinode-618164 in network mk-multinode-618164
	I0811 23:23:46.863579   32156 main.go:141] libmachine: (multinode-618164) DBG | I0811 23:23:46.863520   32185 retry.go:31] will retry after 4.415362505s: waiting for machine to come up
	I0811 23:23:51.283014   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.283528   32156 main.go:141] libmachine: (multinode-618164) Found IP for machine: 192.168.39.6
	I0811 23:23:51.283574   32156 main.go:141] libmachine: (multinode-618164) Reserving static IP address...
	I0811 23:23:51.283610   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has current primary IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.283984   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "multinode-618164", mac: "52:54:00:ac:97:b5", ip: "192.168.39.6"} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.284028   32156 main.go:141] libmachine: (multinode-618164) DBG | skip adding static IP to network mk-multinode-618164 - found existing host DHCP lease matching {name: "multinode-618164", mac: "52:54:00:ac:97:b5", ip: "192.168.39.6"}
	I0811 23:23:51.284039   32156 main.go:141] libmachine: (multinode-618164) Reserved static IP address: 192.168.39.6
	I0811 23:23:51.284051   32156 main.go:141] libmachine: (multinode-618164) DBG | Getting to WaitForSSH function...
	I0811 23:23:51.284075   32156 main.go:141] libmachine: (multinode-618164) Waiting for SSH to be available...
	I0811 23:23:51.285884   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.286217   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.286255   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.286353   32156 main.go:141] libmachine: (multinode-618164) DBG | Using SSH client type: external
	I0811 23:23:51.286384   32156 main.go:141] libmachine: (multinode-618164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa (-rw-------)
	I0811 23:23:51.286417   32156 main.go:141] libmachine: (multinode-618164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0811 23:23:51.286428   32156 main.go:141] libmachine: (multinode-618164) DBG | About to run SSH command:
	I0811 23:23:51.286436   32156 main.go:141] libmachine: (multinode-618164) DBG | exit 0
	I0811 23:23:51.379359   32156 main.go:141] libmachine: (multinode-618164) DBG | SSH cmd err, output: <nil>: 
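The dozen "will retry after ..." lines above come from a retry helper that polls for the VM's DHCP lease with growing, jittered delays (roughly 250ms doubling toward several seconds) until the machine reports an IP and answers SSH. A minimal sketch of that shape; the function name and exact jitter are illustrative, not minikube's retry.go API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff polls fn until it succeeds or maxWait elapses,
	// sleeping a jittered, roughly doubling interval between attempts.
	func retryWithBackoff(maxWait time.Duration, fn func() error) error {
		base := 250 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			// Jitter between 0.5x and 1.5x of base keeps concurrent waiters
			// from polling in lockstep.
			sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			base *= 2
		}
	}

	func main() {
		attempt := 0
		_ = retryWithBackoff(10*time.Second, func() error {
			attempt++
			if attempt < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		})
	}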
	I0811 23:23:51.379772   32156 main.go:141] libmachine: (multinode-618164) Calling .GetConfigRaw
	I0811 23:23:51.380347   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:51.382832   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.383264   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.383303   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.383597   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:23:51.383766   32156 machine.go:88] provisioning docker machine ...
	I0811 23:23:51.383780   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:51.383996   32156 main.go:141] libmachine: (multinode-618164) Calling .GetMachineName
	I0811 23:23:51.384173   32156 buildroot.go:166] provisioning hostname "multinode-618164"
	I0811 23:23:51.384192   32156 main.go:141] libmachine: (multinode-618164) Calling .GetMachineName
	I0811 23:23:51.384352   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.386674   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.387064   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.387095   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.387262   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.387423   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.387565   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.387682   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.387844   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:51.388302   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:51.388324   32156 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-618164 && echo "multinode-618164" | sudo tee /etc/hostname
	I0811 23:23:51.520050   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-618164
	
	I0811 23:23:51.520086   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.523082   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.523564   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.523595   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.523715   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.523934   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.524094   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.524268   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.524454   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:51.524834   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:51.524851   32156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-618164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-618164/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-618164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:23:51.657368   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
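The hosts-file edit above follows the Debian-style convention of mapping the machine's hostname to 127.0.1.1: if such an entry already exists it is rewritten in place, otherwise one is appended, so tools on the node can resolve the hostname without DNS.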
	I0811 23:23:51.657397   32156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17044-9593/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-9593/.minikube}
	I0811 23:23:51.657452   32156 buildroot.go:174] setting up certificates
	I0811 23:23:51.657467   32156 provision.go:83] configureAuth start
	I0811 23:23:51.657480   32156 main.go:141] libmachine: (multinode-618164) Calling .GetMachineName
	I0811 23:23:51.657779   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:51.660466   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.660823   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.660855   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.661021   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.663049   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.663440   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.663476   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.663588   32156 provision.go:138] copyHostCerts
	I0811 23:23:51.663629   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:23:51.663671   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem, removing ...
	I0811 23:23:51.663680   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:23:51.663763   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem (1078 bytes)
	I0811 23:23:51.663874   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:23:51.663900   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem, removing ...
	I0811 23:23:51.663907   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:23:51.663950   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem (1123 bytes)
	I0811 23:23:51.664023   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:23:51.664045   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem, removing ...
	I0811 23:23:51.664050   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:23:51.664084   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem (1675 bytes)
	I0811 23:23:51.664157   32156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem org=jenkins.multinode-618164 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube multinode-618164]
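configureAuth then mints a server certificate whose SANs cover the VM IP, localhost, and the machine names listed above, signed by the local minikube CA. A sketch of that SAN template with Go's crypto/x509 (self-signed here purely to show the shape; the real flow signs with the CA key pair named in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-618164"}},
			NotBefore:    time.Now(),
			// CertExpiration:26280h0m0s from the config dump above.
			NotAfter:    time.Now().Add(26280 * time.Hour),
			DNSNames:    []string{"localhost", "minikube", "multinode-618164"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.6"), net.ParseIP("127.0.0.1")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed: template doubles as parent. Real signing would pass
		// the CA certificate and CA private key here instead.
		if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
			panic(err)
		}
	}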
	I0811 23:23:51.759895   32156 provision.go:172] copyRemoteCerts
	I0811 23:23:51.759959   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:23:51.759985   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.762635   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.762991   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.763026   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.763290   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.763487   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.763674   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.763847   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:51.852641   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:23:51.852720   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0811 23:23:51.878843   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:23:51.878911   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0811 23:23:51.904738   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:23:51.904819   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:23:51.930193   32156 provision.go:86] duration metric: configureAuth took 272.712825ms
	I0811 23:23:51.930229   32156 buildroot.go:189] setting minikube options for container-runtime
	I0811 23:23:51.930438   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:23:51.930521   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:51.930793   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:51.933463   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.933835   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:51.933860   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:51.934016   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:51.934192   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.934362   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:51.934543   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:51.934740   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:51.935138   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:51.935152   32156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 23:23:52.056995   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0811 23:23:52.057017   32156 buildroot.go:70] root file system type: tmpfs
	I0811 23:23:52.057140   32156 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 23:23:52.057163   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:52.060121   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.060522   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:52.060557   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.060692   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:52.060900   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.061113   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.061313   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:52.061520   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:52.062103   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:52.062200   32156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 23:23:52.195958   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 23:23:52.195988   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:52.198688   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.199053   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:52.199074   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:52.199282   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:52.199470   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.199636   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:52.199779   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:52.199906   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:52.200284   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:52.200307   32156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 23:23:53.071780   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0811 23:23:53.071817   32156 machine.go:91] provisioned docker machine in 1.688040811s
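	
	The unit swap above is idempotent: the rendered file goes to docker.service.new and only replaces the live unit (followed by daemon-reload, enable, and restart) when diff reports a difference or, as here, no unit exists yet. The same pattern for a hypothetical unit foo.service, as a hedged sketch:
	  # Install /tmp/foo.service only if it differs from what is deployed.
	  sudo diff -u /lib/systemd/system/foo.service /tmp/foo.service \
	    || { sudo mv /tmp/foo.service /lib/systemd/system/foo.service \
	         && sudo systemctl daemon-reload && sudo systemctl restart foo; }
	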
	I0811 23:23:53.071826   32156 start.go:300] post-start starting for "multinode-618164" (driver="kvm2")
	I0811 23:23:53.071834   32156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:23:53.071853   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.072202   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:23:53.072224   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.074823   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.075153   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.075186   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.075316   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.075502   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.075638   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.075760   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:53.164782   32156 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:23:53.168913   32156 command_runner.go:130] > NAME=Buildroot
	I0811 23:23:53.168930   32156 command_runner.go:130] > VERSION=2021.02.12-1-gb58903a-dirty
	I0811 23:23:53.168936   32156 command_runner.go:130] > ID=buildroot
	I0811 23:23:53.168944   32156 command_runner.go:130] > VERSION_ID=2021.02.12
	I0811 23:23:53.168950   32156 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0811 23:23:53.168984   32156 info.go:137] Remote host: Buildroot 2021.02.12
	I0811 23:23:53.168997   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/addons for local assets ...
	I0811 23:23:53.169057   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/files for local assets ...
	I0811 23:23:53.169150   32156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> 168362.pem in /etc/ssl/certs
	I0811 23:23:53.169164   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /etc/ssl/certs/168362.pem
	I0811 23:23:53.169262   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:23:53.177591   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:23:53.200087   32156 start.go:303] post-start completed in 128.247996ms
	I0811 23:23:53.200108   32156 fix.go:56] fixHost completed within 20.389073179s
	I0811 23:23:53.200136   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.203019   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.203417   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.203444   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.203600   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.203829   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.204071   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.204251   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.204461   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:23:53.204868   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0811 23:23:53.204884   32156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0811 23:23:53.328309   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691796233.277880091
	
	I0811 23:23:53.328348   32156 fix.go:206] guest clock: 1691796233.277880091
	I0811 23:23:53.328355   32156 fix.go:219] Guest: 2023-08-11 23:23:53.277880091 +0000 UTC Remote: 2023-08-11 23:23:53.20011316 +0000 UTC m=+20.510323801 (delta=77.766931ms)
	I0811 23:23:53.328381   32156 fix.go:190] guest clock delta is within tolerance: 77.766931ms
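	
	The guest clock check runs date +%s.%N in the VM and compares it against the host's wall clock at the moment the SSH command returns; the 77.766931ms delta is within tolerance, so the clock is left alone. A rough host-side equivalent, assuming $GUEST is an SSH alias for the VM (a placeholder) and bc is installed:
	  # Compare guest and host epoch time; a large delta suggests clock skew.
	  guest_now=$(ssh "$GUEST" 'date +%s.%N')
	  host_now=$(date +%s.%N)
	  echo "clock delta: $(echo "$host_now - $guest_now" | bc)s"
	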
	I0811 23:23:53.328386   32156 start.go:83] releasing machines lock for "multinode-618164", held for 20.517369844s
	I0811 23:23:53.328407   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.328685   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:53.331421   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.331764   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.331792   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.331943   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.332514   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.332699   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:23:53.332775   32156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:23:53.332823   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.332956   32156 ssh_runner.go:195] Run: cat /version.json
	I0811 23:23:53.332982   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:23:53.335410   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335468   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335829   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.335869   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:53.335888   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335911   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:53.335979   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.336078   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:23:53.336152   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.336208   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:23:53.336358   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.336359   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:23:53.336526   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:53.336539   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:23:53.419787   32156 command_runner.go:130] > {"iso_version": "v1.31.0-1690838458-16971", "kicbase_version": "v0.0.40-1690799191-16971", "minikube_version": "v1.31.1", "commit": "29dfb44a8786625102cff167b7adaa8f8ef2d500"}
	I0811 23:23:53.419957   32156 ssh_runner.go:195] Run: systemctl --version
	I0811 23:23:53.446524   32156 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:23:53.446577   32156 command_runner.go:130] > systemd 247 (247)
	I0811 23:23:53.446602   32156 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0811 23:23:53.446675   32156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:23:53.452149   32156 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0811 23:23:53.452181   32156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0811 23:23:53.452244   32156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:23:53.467021   32156 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0811 23:23:53.467055   32156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:23:53.467067   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:23:53.467195   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:23:53.482878   32156 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0811 23:23:53.483267   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0811 23:23:53.492232   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0811 23:23:53.501066   32156 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0811 23:23:53.501126   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0811 23:23:53.510089   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:23:53.519076   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0811 23:23:53.528146   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:23:53.537240   32156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:23:53.546612   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0811 23:23:53.555662   32156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:23:53.563978   32156 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:23:53.564054   32156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:23:53.572317   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:53.671843   32156 ssh_runner.go:195] Run: sudo systemctl restart containerd
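	
	Although Docker is the selected runtime, containerd's config.toml is normalized first (pause image, cgroupfs driver, runc v2 shim) and containerd restarted, so a later runtime switch would find a consistent baseline. The central toggle among the sed edits above, as a sketch run on the guest:
	  # Force containerd's runc shim onto the cgroupfs driver, then restart.
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd
	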
	I0811 23:23:53.687731   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:23:53.687811   32156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0811 23:23:53.702064   32156 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0811 23:23:53.702088   32156 command_runner.go:130] > [Unit]
	I0811 23:23:53.702099   32156 command_runner.go:130] > Description=Docker Application Container Engine
	I0811 23:23:53.702108   32156 command_runner.go:130] > Documentation=https://docs.docker.com
	I0811 23:23:53.702116   32156 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0811 23:23:53.702121   32156 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0811 23:23:53.702127   32156 command_runner.go:130] > StartLimitBurst=3
	I0811 23:23:53.702133   32156 command_runner.go:130] > StartLimitIntervalSec=60
	I0811 23:23:53.702139   32156 command_runner.go:130] > [Service]
	I0811 23:23:53.702145   32156 command_runner.go:130] > Type=notify
	I0811 23:23:53.702150   32156 command_runner.go:130] > Restart=on-failure
	I0811 23:23:53.702167   32156 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 23:23:53.702181   32156 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 23:23:53.702197   32156 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 23:23:53.702210   32156 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0811 23:23:53.702224   32156 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 23:23:53.702239   32156 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 23:23:53.702257   32156 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 23:23:53.702275   32156 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 23:23:53.702289   32156 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 23:23:53.702297   32156 command_runner.go:130] > ExecStart=
	I0811 23:23:53.702326   32156 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0811 23:23:53.702341   32156 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 23:23:53.702354   32156 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 23:23:53.702368   32156 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 23:23:53.702378   32156 command_runner.go:130] > LimitNOFILE=infinity
	I0811 23:23:53.702387   32156 command_runner.go:130] > LimitNPROC=infinity
	I0811 23:23:53.702397   32156 command_runner.go:130] > LimitCORE=infinity
	I0811 23:23:53.702409   32156 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0811 23:23:53.702417   32156 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0811 23:23:53.702426   32156 command_runner.go:130] > TasksMax=infinity
	I0811 23:23:53.702436   32156 command_runner.go:130] > TimeoutStartSec=0
	I0811 23:23:53.702451   32156 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 23:23:53.702462   32156 command_runner.go:130] > Delegate=yes
	I0811 23:23:53.702475   32156 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0811 23:23:53.702485   32156 command_runner.go:130] > KillMode=process
	I0811 23:23:53.702491   32156 command_runner.go:130] > [Install]
	I0811 23:23:53.702502   32156 command_runner.go:130] > WantedBy=multi-user.target
	I0811 23:23:53.702568   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:23:53.717114   32156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:23:53.733138   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:23:53.744968   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:23:53.756483   32156 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0811 23:23:53.783422   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:23:53.795905   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:23:53.812681   32156 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
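	
	With containerd and CRI-O stopped, crictl is repointed from the containerd socket to cri-dockerd's, so later CRI calls reach Docker through the shim. The write amounts to a one-line config file; a sketch of the same step:
	  # Point crictl at the cri-dockerd CRI endpoint.
	  printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
	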
	I0811 23:23:53.813092   32156 ssh_runner.go:195] Run: which cri-dockerd
	I0811 23:23:53.816651   32156 command_runner.go:130] > /usr/bin/cri-dockerd
	I0811 23:23:53.816744   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0811 23:23:53.825824   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0811 23:23:53.841306   32156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0811 23:23:53.953526   32156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0811 23:23:54.065429   32156 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0811 23:23:54.065459   32156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0811 23:23:54.081837   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:54.182796   32156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0811 23:23:55.651015   32156 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.468179659s)
	I0811 23:23:55.651079   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:23:55.766938   32156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0811 23:23:55.867537   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:23:55.971821   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:56.072586   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0811 23:23:56.093196   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:23:56.223201   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0811 23:23:56.306062   32156 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0811 23:23:56.306131   32156 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0811 23:23:56.311586   32156 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0811 23:23:56.311612   32156 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0811 23:23:56.311621   32156 command_runner.go:130] > Device: 16h/22d	Inode: 861         Links: 1
	I0811 23:23:56.311631   32156 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0811 23:23:56.311640   32156 command_runner.go:130] > Access: 2023-08-11 23:23:56.189456579 +0000
	I0811 23:23:56.311648   32156 command_runner.go:130] > Modify: 2023-08-11 23:23:56.189456579 +0000
	I0811 23:23:56.311660   32156 command_runner.go:130] > Change: 2023-08-11 23:23:56.192456579 +0000
	I0811 23:23:56.311665   32156 command_runner.go:130] >  Birth: -
	I0811 23:23:56.311689   32156 start.go:534] Will wait 60s for crictl version
	I0811 23:23:56.311738   32156 ssh_runner.go:195] Run: which crictl
	I0811 23:23:56.316016   32156 command_runner.go:130] > /usr/bin/crictl
	I0811 23:23:56.316082   32156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:23:56.352987   32156 command_runner.go:130] > Version:  0.1.0
	I0811 23:23:56.353012   32156 command_runner.go:130] > RuntimeName:  docker
	I0811 23:23:56.353017   32156 command_runner.go:130] > RuntimeVersion:  24.0.4
	I0811 23:23:56.353022   32156 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0811 23:23:56.354461   32156 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0811 23:23:56.354520   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:23:56.380173   32156 command_runner.go:130] > 24.0.4
	I0811 23:23:56.381466   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:23:56.408078   32156 command_runner.go:130] > 24.0.4
	I0811 23:23:56.411512   32156 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0811 23:23:56.411562   32156 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:23:56.414352   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:56.414801   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:23:56.414834   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:23:56.415056   32156 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0811 23:23:56.419160   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
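	
	host.minikube.internal gives the guest a stable name for the host (the libvirt gateway, 192.168.39.1). The one-liner rewrites /etc/hosts by filtering out any stale entry and appending a fresh one via a temp file; spelled out as a sketch:
	  # Replace any existing host.minikube.internal entry with 192.168.39.1.
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
	    && sudo cp /tmp/h.$$ /etc/hosts
	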
	I0811 23:23:56.431250   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:23:56.431297   32156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 23:23:56.450313   32156 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.4
	I0811 23:23:56.450330   32156 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.4
	I0811 23:23:56.450342   32156 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.4
	I0811 23:23:56.450349   32156 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.4
	I0811 23:23:56.450353   32156 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0811 23:23:56.450357   32156 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0811 23:23:56.450362   32156 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0811 23:23:56.450372   32156 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0811 23:23:56.450377   32156 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:23:56.450381   32156 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0811 23:23:56.451350   32156 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
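	
	The listing is compared against the expected preload set for Kubernetes v1.27.4 on Docker; since all ten images are already present, extraction of the preload tarball is skipped. The probe itself is plain Go-template formatting:
	  # List images the way the preload check does.
	  docker images --format '{{.Repository}}:{{.Tag}}'
	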
	I0811 23:23:56.451375   32156 docker.go:566] Images already preloaded, skipping extraction
	I0811 23:23:56.451416   32156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 23:23:56.469960   32156 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.4
	I0811 23:23:56.469975   32156 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.4
	I0811 23:23:56.469981   32156 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.4
	I0811 23:23:56.469986   32156 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.4
	I0811 23:23:56.469996   32156 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0811 23:23:56.470001   32156 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0811 23:23:56.470006   32156 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0811 23:23:56.470010   32156 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0811 23:23:56.470014   32156 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:23:56.470022   32156 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0811 23:23:56.470942   32156 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0811 23:23:56.470975   32156 cache_images.go:84] Images are preloaded, skipping loading
	I0811 23:23:56.471028   32156 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0811 23:23:56.497884   32156 command_runner.go:130] > cgroupfs
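	
	The cgroup driver reported by dockerd must match kubelet's, and the value found here (cgroupfs) is threaded into the KubeletConfiguration rendered below. The same probe from a shell:
	  # Ask dockerd which cgroup driver it uses; kubelet must agree.
	  docker info --format '{{.CgroupDriver}}'
	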
	I0811 23:23:56.498015   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:23:56.498032   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:23:56.498040   32156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:23:56.498061   32156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-618164 NodeName:multinode-618164 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:23:56.498205   32156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-618164"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 23:23:56.498267   32156 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-618164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
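	
	The rendered kubeadm documents are staged to /var/tmp/minikube/kubeadm.yaml.new (the 2099-byte scp below) and, on a restart, compared with the previously applied config to decide whether a full re-init is needed. The comparison is simply:
	  # Compare the freshly rendered kubeadm config with the one last applied.
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	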
	I0811 23:23:56.498312   32156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:23:56.508130   32156 command_runner.go:130] > kubeadm
	I0811 23:23:56.508150   32156 command_runner.go:130] > kubectl
	I0811 23:23:56.508156   32156 command_runner.go:130] > kubelet
	I0811 23:23:56.508307   32156 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:23:56.508367   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 23:23:56.517170   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0811 23:23:56.532821   32156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 23:23:56.548306   32156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0811 23:23:56.566152   32156 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0811 23:23:56.570221   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:23:56.582186   32156 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164 for IP: 192.168.39.6
	I0811 23:23:56.582217   32156 certs.go:190] acquiring lock for shared ca certs: {Name:mke12ed30faa4458f68c7f1069767b7834c8a1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:23:56.582354   32156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key
	I0811 23:23:56.582418   32156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key
	I0811 23:23:56.582498   32156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key
	I0811 23:23:56.582583   32156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.key.cc3bd7a5
	I0811 23:23:56.582638   32156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.key
	I0811 23:23:56.582652   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0811 23:23:56.582678   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0811 23:23:56.582699   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0811 23:23:56.582718   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0811 23:23:56.582736   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:23:56.582754   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:23:56.582772   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:23:56.582789   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 23:23:56.582856   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem (1338 bytes)
	W0811 23:23:56.582894   32156 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836_empty.pem, impossibly tiny 0 bytes
	I0811 23:23:56.582909   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem (1679 bytes)
	I0811 23:23:56.582947   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem (1078 bytes)
	I0811 23:23:56.582983   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:23:56.583016   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem (1675 bytes)
	I0811 23:23:56.583070   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:23:56.583127   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.583147   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem -> /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.583166   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.583678   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 23:23:56.609836   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 23:23:56.633914   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 23:23:56.659924   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0811 23:23:56.684037   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:23:56.707211   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 23:23:56.732529   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:23:56.756471   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0811 23:23:56.781148   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:23:56.805144   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem --> /usr/share/ca-certificates/16836.pem (1338 bytes)
	I0811 23:23:56.829103   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /usr/share/ca-certificates/168362.pem (1708 bytes)
	I0811 23:23:56.852875   32156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 23:23:56.870933   32156 ssh_runner.go:195] Run: openssl version
	I0811 23:23:56.876297   32156 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0811 23:23:56.876562   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:23:56.888670   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.893257   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.893511   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.893558   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:23:56.898906   32156 command_runner.go:130] > b5213941
	I0811 23:23:56.899091   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:23:56.910898   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16836.pem && ln -fs /usr/share/ca-certificates/16836.pem /etc/ssl/certs/16836.pem"
	I0811 23:23:56.922490   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.927389   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.927416   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.927458   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16836.pem
	I0811 23:23:56.933404   32156 command_runner.go:130] > 51391683
	I0811 23:23:56.933456   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16836.pem /etc/ssl/certs/51391683.0"
	I0811 23:23:56.945430   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168362.pem && ln -fs /usr/share/ca-certificates/168362.pem /etc/ssl/certs/168362.pem"
	I0811 23:23:56.957473   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.962297   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.962400   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.962441   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168362.pem
	I0811 23:23:56.967962   32156 command_runner.go:130] > 3ec20f2e
	I0811 23:23:56.968147   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168362.pem /etc/ssl/certs/3ec20f2e.0"
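	
	Each CA lands under /usr/share/ca-certificates and is exposed to OpenSSL through a subject-hash symlink (<hash>.0) in /etc/ssl/certs, which is where the hashes b5213941, 51391683, and 3ec20f2e above come from. A condensed sketch for one CA, assuming the file is already on the guest:
	  # Compute the OpenSSL subject hash and create the lookup symlink.
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	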
	I0811 23:23:56.980192   32156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:23:56.984658   32156 command_runner.go:130] > ca.crt
	I0811 23:23:56.984671   32156 command_runner.go:130] > ca.key
	I0811 23:23:56.984681   32156 command_runner.go:130] > healthcheck-client.crt
	I0811 23:23:56.984685   32156 command_runner.go:130] > healthcheck-client.key
	I0811 23:23:56.984689   32156 command_runner.go:130] > peer.crt
	I0811 23:23:56.984693   32156 command_runner.go:130] > peer.key
	I0811 23:23:56.984696   32156 command_runner.go:130] > server.crt
	I0811 23:23:56.984700   32156 command_runner.go:130] > server.key
	I0811 23:23:56.985037   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0811 23:23:56.990675   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:56.990998   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0811 23:23:56.996756   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:56.997039   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0811 23:23:57.002784   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:57.002849   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0811 23:23:57.008397   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:57.008693   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0811 23:23:57.014226   32156 command_runner.go:130] > Certificate will not expire
	I0811 23:23:57.014501   32156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0811 23:23:57.020206   32156 command_runner.go:130] > Certificate will not expire
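	
	Before reusing the existing cluster PKI, each cert is checked with -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; all six checks pass here, so nothing is regenerated. For one cert:
	  # Exit 0 ("Certificate will not expire") if still valid 24h from now.
	  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt
	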
	I0811 23:23:57.020384   32156 kubeadm.go:404] StartCluster: {Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingres
s:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:23:57.020523   32156 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 23:23:57.043601   32156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 23:23:57.055163   32156 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0811 23:23:57.055181   32156 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0811 23:23:57.055187   32156 command_runner.go:130] > /var/lib/minikube/etcd:
	I0811 23:23:57.055190   32156 command_runner.go:130] > member
	I0811 23:23:57.055397   32156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0811 23:23:57.055414   32156 kubeadm.go:636] restartCluster start
	I0811 23:23:57.055461   32156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0811 23:23:57.066155   32156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:57.066667   32156 kubeconfig.go:135] verify returned: extract IP: "multinode-618164" does not appear in /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:23:57.066795   32156 kubeconfig.go:146] "multinode-618164" context is missing from /home/jenkins/minikube-integration/17044-9593/kubeconfig - will repair!
	I0811 23:23:57.067123   32156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-9593/kubeconfig: {Name:mk5d0cc13acd7d86edf0e41f0198b0f7dd85af9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:23:57.067520   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:23:57.067745   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:23:57.068588   32156 cert_rotation.go:137] Starting client certificate rotation controller
	I0811 23:23:57.068768   32156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0811 23:23:57.079070   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:57.079121   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:57.092235   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:57.092251   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:57.092291   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:57.104916   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:57.605643   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:57.605826   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:57.618255   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:58.104969   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:58.105071   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:58.117713   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:58.605244   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:58.605323   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:58.617825   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:59.105371   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:59.105448   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:59.118693   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:23:59.605213   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:23:59.605293   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:23:59.617820   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:00.105361   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:00.105458   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:00.118242   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:00.605942   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:00.606023   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:00.618784   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:01.105318   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:01.105397   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:01.118306   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:01.605916   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:01.605980   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:01.618363   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:02.106046   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:02.106125   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:02.118972   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:02.605633   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:02.605720   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:02.618271   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:03.105628   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:03.105699   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:03.118269   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:03.605900   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:03.605997   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:03.618729   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:04.105270   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:04.105338   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:04.118027   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:04.605749   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:04.605833   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:04.618161   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:05.105826   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:05.105910   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:05.118139   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:05.605768   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:05.605857   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:05.617839   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:06.105373   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:06.105458   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:06.117905   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:06.605475   32156 api_server.go:166] Checking apiserver status ...
	I0811 23:24:06.605560   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:24:06.618008   32156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:24:07.079788   32156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
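	The ten-second stretch of pgrep retries above is a simple poll-until-deadline loop. A minimal sketch of that shape follows: the ~500ms interval matches the log's cadence, while the 10s deadline is an assumption for the demo, and minikube's actual wait logic may differ.

// pollapiserver.go - sketch of the pgrep polling loop reflected above:
// retry until a kube-apiserver pid appears or the deadline passes.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) // assumed deadline
	defer cancel()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		select {
		case <-ctx.Done():
			// Mirrors the verdict above: the loop gives up with a context error.
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}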
	I0811 23:24:07.079816   32156 kubeadm.go:1128] stopping kube-system containers ...
	I0811 23:24:07.079870   32156 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 23:24:07.101442   32156 command_runner.go:130] > e5175209bd61
	I0811 23:24:07.101457   32156 command_runner.go:130] > 5bb51d1cc942
	I0811 23:24:07.101461   32156 command_runner.go:130] > 92137e4b2bde
	I0811 23:24:07.101465   32156 command_runner.go:130] > 5b35741c12db
	I0811 23:24:07.101469   32156 command_runner.go:130] > feef63247dc8
	I0811 23:24:07.101473   32156 command_runner.go:130] > c0158a6605ea
	I0811 23:24:07.101476   32156 command_runner.go:130] > 53769ace7d8f
	I0811 23:24:07.101480   32156 command_runner.go:130] > c453bb965128
	I0811 23:24:07.101485   32156 command_runner.go:130] > ef74cd56c60d
	I0811 23:24:07.101491   32156 command_runner.go:130] > a3429cc90df2
	I0811 23:24:07.101496   32156 command_runner.go:130] > 2965fda37c07
	I0811 23:24:07.101502   32156 command_runner.go:130] > 5f9d39ea2d1f
	I0811 23:24:07.101509   32156 command_runner.go:130] > e102c9cb8b46
	I0811 23:24:07.101515   32156 command_runner.go:130] > 208f3b4c3f22
	I0811 23:24:07.101530   32156 command_runner.go:130] > 609eb0503045
	I0811 23:24:07.101536   32156 command_runner.go:130] > 5db82ba10c90
	I0811 23:24:07.102528   32156 docker.go:462] Stopping containers: [e5175209bd61 5bb51d1cc942 92137e4b2bde 5b35741c12db feef63247dc8 c0158a6605ea 53769ace7d8f c453bb965128 ef74cd56c60d a3429cc90df2 2965fda37c07 5f9d39ea2d1f e102c9cb8b46 208f3b4c3f22 609eb0503045 5db82ba10c90]
	I0811 23:24:07.102587   32156 ssh_runner.go:195] Run: docker stop e5175209bd61 5bb51d1cc942 92137e4b2bde 5b35741c12db feef63247dc8 c0158a6605ea 53769ace7d8f c453bb965128 ef74cd56c60d a3429cc90df2 2965fda37c07 5f9d39ea2d1f e102c9cb8b46 208f3b4c3f22 609eb0503045 5db82ba10c90
	I0811 23:24:07.120025   32156 command_runner.go:130] > e5175209bd61
	I0811 23:24:07.120046   32156 command_runner.go:130] > 5bb51d1cc942
	I0811 23:24:07.120726   32156 command_runner.go:130] > 92137e4b2bde
	I0811 23:24:07.120868   32156 command_runner.go:130] > 5b35741c12db
	I0811 23:24:07.120883   32156 command_runner.go:130] > feef63247dc8
	I0811 23:24:07.121133   32156 command_runner.go:130] > c0158a6605ea
	I0811 23:24:07.121319   32156 command_runner.go:130] > 53769ace7d8f
	I0811 23:24:07.121596   32156 command_runner.go:130] > c453bb965128
	I0811 23:24:07.121776   32156 command_runner.go:130] > ef74cd56c60d
	I0811 23:24:07.123023   32156 command_runner.go:130] > a3429cc90df2
	I0811 23:24:07.123166   32156 command_runner.go:130] > 2965fda37c07
	I0811 23:24:07.123179   32156 command_runner.go:130] > 5f9d39ea2d1f
	I0811 23:24:07.123186   32156 command_runner.go:130] > e102c9cb8b46
	I0811 23:24:07.123192   32156 command_runner.go:130] > 208f3b4c3f22
	I0811 23:24:07.123198   32156 command_runner.go:130] > 609eb0503045
	I0811 23:24:07.123205   32156 command_runner.go:130] > 5db82ba10c90
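	The stop step above pairs a filtered docker ps -a with a single docker stop over the collected IDs. A minimal local sketch, assuming the Docker CLI is on PATH; the filter string is taken verbatim from the log.

// stopkubesystem.go - sketch of the container stop step above: list
// kube-system container IDs with the same docker filter, then stop them.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	fmt.Println("Stopping containers:", ids)
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		panic(err)
	}
}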
	I0811 23:24:07.124644   32156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0811 23:24:07.141077   32156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 23:24:07.150449   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0811 23:24:07.150465   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0811 23:24:07.150472   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0811 23:24:07.150478   32156 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:24:07.150553   32156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:24:07.150600   32156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 23:24:07.160111   32156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0811 23:24:07.160148   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:07.276942   32156 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 23:24:07.277335   32156 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0811 23:24:07.277811   32156 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0811 23:24:07.278282   32156 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0811 23:24:07.279541   32156 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0811 23:24:07.280002   32156 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0811 23:24:07.280839   32156 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0811 23:24:07.281293   32156 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0811 23:24:07.281771   32156 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0811 23:24:07.282196   32156 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0811 23:24:07.282627   32156 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0811 23:24:07.284468   32156 command_runner.go:130] > [certs] Using the existing "sa" key
	I0811 23:24:07.284530   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.060026   32156 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 23:24:08.060052   32156 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 23:24:08.060065   32156 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 23:24:08.060074   32156 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 23:24:08.060084   32156 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 23:24:08.060113   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.130867   32156 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:24:08.133320   32156 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:24:08.133411   32156 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0811 23:24:08.254043   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.356243   32156 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 23:24:08.356264   32156 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 23:24:08.356270   32156 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 23:24:08.356277   32156 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 23:24:08.356408   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:08.432444   32156 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
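	The reconfigure path above replays kubeadm's init phases one at a time rather than running a full kubeadm init. A minimal sketch of that sequence, assuming a local bash and using the same binary path and config file shown in the log:

// reconfigure.go - sketch of the phase sequence above:
// certs -> kubeconfig -> kubelet-start -> control-plane -> etcd.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err) // a failed phase aborts the restart
		}
	}
}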
	I0811 23:24:08.446083   32156 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:24:08.446163   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:08.457920   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:08.973444   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:09.473768   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:09.973608   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:10.473625   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:10.523079   32156 command_runner.go:130] > 1697
	I0811 23:24:10.523142   32156 api_server.go:72] duration metric: took 2.077063631s to wait for apiserver process to appear ...
	I0811 23:24:10.523153   32156 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:24:10.523169   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:10.523707   32156 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I0811 23:24:10.523743   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:10.524067   32156 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I0811 23:24:11.024917   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:15.146509   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 23:24:15.146543   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 23:24:15.146557   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:15.162963   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 23:24:15.162989   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 23:24:15.524452   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:15.529914   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:24:15.529939   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:24:16.024527   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:16.030080   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:24:16.030104   32156 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:24:16.524699   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:16.529920   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
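	The healthz wait above polls roughly every 500ms and treats anything but a 200 with an "ok" body (the anonymous 403s, then the 500s while the rbac and priority-class bootstrap hooks finish) as not-ready. A minimal sketch; TLS verification is skipped here for brevity, whereas minikube's client presents the profile's client certificate, which is why the anonymous 403 above still counts as progress.

// healthzwait.go - sketch of the healthz polling above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	for {
		resp, err := client.Get("https://192.168.39.6:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
}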
	I0811 23:24:16.529982   32156 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0811 23:24:16.529987   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:16.529995   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:16.530004   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:16.543593   32156 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0811 23:24:16.543621   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:16.543632   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:16 GMT
	I0811 23:24:16.543641   32156 round_trippers.go:580]     Audit-Id: 598ce2af-61b4-4aee-b059-0721d25a0c30
	I0811 23:24:16.543649   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:16.543658   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:16.543665   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:16.543673   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:16.543696   32156 round_trippers.go:580]     Content-Length: 263
	I0811 23:24:16.543957   32156 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0811 23:24:16.544064   32156 api_server.go:141] control plane version: v1.27.4
	I0811 23:24:16.544088   32156 api_server.go:131] duration metric: took 6.020928424s to wait for apiserver health ...
	I0811 23:24:16.544099   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:24:16.544116   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:24:16.546067   32156 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 23:24:16.547723   32156 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:24:16.556631   32156 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0811 23:24:16.556655   32156 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0811 23:24:16.556665   32156 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0811 23:24:16.556686   32156 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:24:16.556698   32156 command_runner.go:130] > Access: 2023-08-11 23:23:45.638456579 +0000
	I0811 23:24:16.556707   32156 command_runner.go:130] > Modify: 2023-08-01 03:01:17.000000000 +0000
	I0811 23:24:16.556715   32156 command_runner.go:130] > Change: 2023-08-11 23:23:43.758456579 +0000
	I0811 23:24:16.556724   32156 command_runner.go:130] >  Birth: -
	I0811 23:24:16.556941   32156 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:24:16.556958   32156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:24:16.582212   32156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:24:18.035856   32156 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:24:18.035881   32156 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:24:18.035892   32156 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0811 23:24:18.035896   32156 command_runner.go:130] > daemonset.apps/kindnet configured
	I0811 23:24:18.035913   32156 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.453677621s)
	I0811 23:24:18.035931   32156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:24:18.036017   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:18.036074   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.036089   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.036095   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.040676   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:18.040699   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.040710   32156 round_trippers.go:580]     Audit-Id: 4df18645-655d-4a79-8469-4caba2b1ee9d
	I0811 23:24:18.040731   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.040745   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.040751   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.040759   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.040765   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:17 GMT
	I0811 23:24:18.041909   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"832"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84507 chars]
	I0811 23:24:18.045998   32156 system_pods.go:59] 12 kube-system pods found
	I0811 23:24:18.046031   32156 system_pods.go:61] "coredns-5d78c9869d-zrmf9" [c3c83ae1-ae12-4872-9c78-4aff9f1cefe4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0811 23:24:18.046040   32156 system_pods.go:61] "etcd-multinode-618164" [543135b3-5e52-43aa-af7c-1fea5cfb95b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0811 23:24:18.046048   32156 system_pods.go:61] "kindnet-clfqj" [b3e12c4b-402f-467b-a1f2-f7db2ae3d0ef] Running
	I0811 23:24:18.046052   32156 system_pods.go:61] "kindnet-m2c5t" [5264f13e-c667-4d82-912f-49c23eaf31cd] Running
	I0811 23:24:18.046059   32156 system_pods.go:61] "kindnet-szdxp" [d827d201-1ae4-4db8-858f-0fda601d5c40] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0811 23:24:18.046071   32156 system_pods.go:61] "kube-apiserver-multinode-618164" [a1145d9b-2c2a-42b1-bbe6-142472dc9d01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0811 23:24:18.046078   32156 system_pods.go:61] "kube-controller-manager-multinode-618164" [41f34044-7115-493f-94d8-53f69fd37242] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0811 23:24:18.046086   32156 system_pods.go:61] "kube-proxy-9ldtq" [ff783df6-3af7-44cf-bc60-843db8420efa] Running
	I0811 23:24:18.046092   32156 system_pods.go:61] "kube-proxy-glw45" [4616f16f-9566-447c-90cd-8e37c18508e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0811 23:24:18.046103   32156 system_pods.go:61] "kube-proxy-pv5p5" [08e6223f-0c5c-47bd-b37d-67f279f4d4be] Running
	I0811 23:24:18.046109   32156 system_pods.go:61] "kube-scheduler-multinode-618164" [b2a96d9a-e022-4abd-b8c6-e6ec3102773f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0811 23:24:18.046117   32156 system_pods.go:61] "storage-provisioner" [84ba55f6-4725-46ae-810f-130cbb82dd7f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0811 23:24:18.046123   32156 system_pods.go:74] duration metric: took 10.186574ms to wait for pod list to return data ...
	I0811 23:24:18.046132   32156 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:24:18.046176   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0811 23:24:18.046183   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.046190   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.046196   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.048881   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.048898   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.048908   32156 round_trippers.go:580]     Audit-Id: 1115fb47-264c-47dd-9ccc-f4657b13068b
	I0811 23:24:18.048917   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.048933   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.048943   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.048951   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.048956   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:17 GMT
	I0811 23:24:18.049382   32156 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"832"},"items":[{"metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13669 chars]
	I0811 23:24:18.050131   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:18.050152   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:18.050160   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:18.050164   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:18.050167   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:18.050170   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:18.050174   32156 node_conditions.go:105] duration metric: took 4.037902ms to run NodePressure ...
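	The NodePressure verification above just reads each node's capacity out of the NodeList and checks it against minimums. A minimal self-contained sketch; the capacities are the logged values, but the thresholds here are assumptions, since the real check lives in node_conditions.go:

// nodepressure.go - sketch of the capacity check reflected above.
package main

import "fmt"

type node struct {
	cpu       int
	storageKi int64
}

func main() {
	// Values taken from the three nodes logged above.
	nodes := []node{{2, 17784752}, {2, 17784752}, {2, 17784752}}
	for i, n := range nodes {
		fmt.Printf("node %d: storage ephemeral capacity is %dKi, cpu capacity is %d\n", i, n.storageKi, n.cpu)
		if n.cpu < 2 || n.storageKi < 10*1024*1024 { // assumed minimums
			fmt.Println("insufficient capacity")
		}
	}
}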
	I0811 23:24:18.050187   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:24:18.257419   32156 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0811 23:24:18.257449   32156 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0811 23:24:18.257534   32156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0811 23:24:18.257680   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0811 23:24:18.257693   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.257704   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.257714   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.260900   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.260916   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.260922   32156 round_trippers.go:580]     Audit-Id: 288a3947-3654-49b4-8986-603058e388e2
	I0811 23:24:18.260927   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.260938   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.260951   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.260960   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.260974   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.261409   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"765","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 29734 chars]
	I0811 23:24:18.262407   32156 kubeadm.go:787] kubelet initialised
	I0811 23:24:18.262423   32156 kubeadm.go:788] duration metric: took 4.87217ms waiting for restarted kubelet to initialise ...
	I0811 23:24:18.262429   32156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
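	The per-pod waits that follow hinge on two conditions: the pod's own Ready condition and, as the "(skipping!)" lines below show, the Ready condition of the node hosting it. A minimal sketch of that gating using hypothetical plain structs (the real code reads the Kubernetes API types):

// podready.go - sketch of the readiness gating reflected below: a system
// pod only counts as "Ready" once its Ready condition is True, and the
// wait short-circuits while the hosting node still reports Ready=False.
package main

import "fmt"

type cond struct{ Type, Status string }

func ready(conds []cond) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	nodeConds := []cond{{"Ready", "False"}}                            // from the node status below
	podConds := []cond{{"Ready", "False"}, {"ContainersReady", "False"}} // from the pod status below
	if !ready(nodeConds) {
		fmt.Println(`node "multinode-618164" is not "Ready" (skipping!)`)
		return
	}
	fmt.Println("pod ready:", ready(podConds))
}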
	I0811 23:24:18.262470   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:18.262478   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.262485   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.262491   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.268206   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:18.268224   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.268230   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.268244   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.268250   32156 round_trippers.go:580]     Audit-Id: b6772242-1278-44fe-99c3-99f4cecfcb50
	I0811 23:24:18.268256   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.268264   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.268269   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.270875   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84507 chars]
	I0811 23:24:18.273379   32156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.273462   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:18.273475   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.273486   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.273496   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.276941   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.276963   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.276974   32156 round_trippers.go:580]     Audit-Id: a5ac1403-abc1-4d4a-a0d4-e104245882e2
	I0811 23:24:18.276983   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.276992   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.277008   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.277016   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.277028   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.277829   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:18.278330   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.278344   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.278351   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.278357   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.280756   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.280772   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.280781   32156 round_trippers.go:580]     Audit-Id: d3b3c3fb-85f4-4a71-b869-560c44353ecf
	I0811 23:24:18.280791   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.280800   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.280814   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.280819   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.280824   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.280964   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.281343   32156 pod_ready.go:97] node "multinode-618164" hosting pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.281363   32156 pod_ready.go:81] duration metric: took 7.962993ms waiting for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.281370   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.281376   32156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.281421   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-618164
	I0811 23:24:18.281428   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.281434   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.281440   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.283955   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.283969   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.283975   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.283983   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.283992   32156 round_trippers.go:580]     Audit-Id: 2bc5beb2-e91a-470f-a60e-574b311bcaf5
	I0811 23:24:18.284002   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.284010   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.284026   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.284630   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"765","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6285 chars]
	I0811 23:24:18.284979   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.284990   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.284997   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.285005   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.286960   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:18.286979   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.286988   32156 round_trippers.go:580]     Audit-Id: 0b900ef5-d36c-4f31-89e0-0348ff68b814
	I0811 23:24:18.286997   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.287010   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.287019   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.287031   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.287044   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.287186   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.287464   32156 pod_ready.go:97] node "multinode-618164" hosting pod "etcd-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.287480   32156 pod_ready.go:81] duration metric: took 6.092582ms waiting for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.287488   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "etcd-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
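
Each readiness probe in this trace has the same two-step shape: GET the pod, then GET the node the pod is scheduled on, and if that node does not report Ready, record the pod as not Ready and move on (the pod_ready.go:97 and pod_ready.go:66 lines above). A minimal client-go sketch of that gate, assuming a standard clientset; the function name isPodReadyOnReadyNode is hypothetical, and this illustrates the pattern, not minikube's actual implementation:

    package kubeutil

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReadyOnReadyNode fetches the pod, then its hosting node, and
    // treats the pod as not Ready whenever the node itself is not Ready,
    // mirroring the request pairs in the trace above.
    func isPodReadyOnReadyNode(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
                return false, nil // hosting node not Ready: skip the pod
            }
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
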
	I0811 23:24:18.287509   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.287587   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-618164
	I0811 23:24:18.287597   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.287607   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.287619   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.290857   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.290876   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.290885   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.290894   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.290908   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.290917   32156 round_trippers.go:580]     Audit-Id: a1066911-3b42-4ace-aea0-51ce2cd88bac
	I0811 23:24:18.290925   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.290931   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.291085   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-618164","namespace":"kube-system","uid":"a1145d9b-2c2a-42b1-bbe6-142472dc9d01","resourceVersion":"769","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.6:8443","kubernetes.io/config.hash":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.mirror":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.seen":"2023-08-11T23:20:15.427440318Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I0811 23:24:18.291573   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.291592   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.291603   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.291616   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.293435   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:18.293447   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.293453   32156 round_trippers.go:580]     Audit-Id: 79ca5249-4ff5-4112-900e-72efee7e30fb
	I0811 23:24:18.293458   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.293463   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.293468   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.293480   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.293498   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.293712   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.294048   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-apiserver-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.294068   32156 pod_ready.go:81] duration metric: took 6.520131ms waiting for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.294075   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-apiserver-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.294083   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.294134   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-618164
	I0811 23:24:18.294141   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.294148   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.294154   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.295834   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:18.295846   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.295852   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.295858   32156 round_trippers.go:580]     Audit-Id: 7c5089e7-4175-428e-ac85-0acd8a061636
	I0811 23:24:18.295863   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.295877   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.295885   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.295907   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.296220   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-618164","namespace":"kube-system","uid":"41f34044-7115-493f-94d8-53f69fd37242","resourceVersion":"770","creationTimestamp":"2023-08-11T23:20:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.mirror":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.seen":"2023-08-11T23:20:06.002920339Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I0811 23:24:18.436947   32156 request.go:628] Waited for 140.30031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.437004   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:18.437009   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.437021   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.437030   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.440103   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.440125   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.440135   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.440144   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.440153   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.440163   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.440172   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.440183   32156 round_trippers.go:580]     Audit-Id: 2651f9fb-6e9f-4069-9400-cf213560fc66
	I0811 23:24:18.440601   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:18.440908   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-controller-manager-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:18.440931   32156 pod_ready.go:81] duration metric: took 146.836208ms waiting for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:18.440941   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-controller-manager-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
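
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter, which defaults to QPS=5 and Burst=10; at 5 requests per second the client spaces calls roughly 200ms apart, which matches the 140-196ms waits recorded here. The limiter is tuned on the rest.Config before the clientset is built. A sketch with illustrative values (the numbers and the helper name are assumptions, not what minikube uses):

    package kubeutil

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newTunedClient builds a clientset whose rate limiter allows more
    // than the client-go defaults (QPS=5, Burst=10) responsible for the
    // ~200ms request spacing visible in this trace.
    func newTunedClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // illustrative value
        cfg.Burst = 100 // illustrative value
        return kubernetes.NewForConfig(cfg)
    }
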
	I0811 23:24:18.440957   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.636431   32156 request.go:628] Waited for 195.407374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:24:18.636505   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:24:18.636510   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.636517   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.636524   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.640067   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:18.640085   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.640092   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.640098   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.640106   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.640115   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.640125   32156 round_trippers.go:580]     Audit-Id: 032b360e-0f94-45d6-af15-c6160aa8c3a5
	I0811 23:24:18.640134   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.640639   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9ldtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff783df6-3af7-44cf-bc60-843db8420efa","resourceVersion":"534","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0811 23:24:18.836514   32156 request.go:628] Waited for 195.424526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:24:18.836577   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:24:18.836584   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:18.836595   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:18.836610   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:18.839277   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:18.839296   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:18.839303   32156 round_trippers.go:580]     Audit-Id: a85e7486-69f1-4ee8-a5bb-7113c8d7c0ad
	I0811 23:24:18.839311   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:18.839322   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:18.839333   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:18.839343   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:18.839352   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:18.839527   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"5117de97-d432-4fe0-baad-4ef71b0a5470","resourceVersion":"599","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3267 chars]
	I0811 23:24:18.839884   32156 pod_ready.go:92] pod "kube-proxy-9ldtq" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:18.839904   32156 pod_ready.go:81] duration metric: took 398.937925ms waiting for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:18.839918   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.036269   32156 request.go:628] Waited for 196.273614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:19.036317   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:19.036327   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.036338   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.036350   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.039088   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:19.039124   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.039135   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:18 GMT
	I0811 23:24:19.039146   32156 round_trippers.go:580]     Audit-Id: 79dfe5b2-f19e-4a9b-9100-e3671b291ec3
	I0811 23:24:19.039162   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.039171   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.039183   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.039196   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.039380   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-glw45","generateName":"kube-proxy-","namespace":"kube-system","uid":"4616f16f-9566-447c-90cd-8e37c18508e3","resourceVersion":"768","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5924 chars]
	I0811 23:24:19.236148   32156 request.go:628] Waited for 196.339332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:19.236220   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:19.236238   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.236250   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.236256   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.240044   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:19.240061   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.240067   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.240073   32156 round_trippers.go:580]     Audit-Id: 789f221c-4319-45a0-935c-e22bc9b67be5
	I0811 23:24:19.240085   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.240102   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.240110   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.240122   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.240427   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:19.240725   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-proxy-glw45" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:19.240742   32156 pod_ready.go:81] duration metric: took 400.81245ms waiting for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:19.240749   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-proxy-glw45" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:19.240760   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.436165   32156 request.go:628] Waited for 195.331627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:19.436247   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:19.436257   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.436269   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.436279   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.439042   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:19.439061   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.439068   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.439074   32156 round_trippers.go:580]     Audit-Id: 2eb0809d-c710-4504-8605-d3ee1964d272
	I0811 23:24:19.439082   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.439120   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.439131   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.439138   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.439453   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pv5p5","generateName":"kube-proxy-","namespace":"kube-system","uid":"08e6223f-0c5c-47bd-b37d-67f279f4d4be","resourceVersion":"737","creationTimestamp":"2023-08-11T23:22:07Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0811 23:24:19.636165   32156 request.go:628] Waited for 196.302622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:19.636222   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:19.636229   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.636241   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.636251   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.639380   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:19.639403   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.639413   32156 round_trippers.go:580]     Audit-Id: 7c3eeb1c-5858-473c-a93b-eabca2a09765
	I0811 23:24:19.639420   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.639429   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.639442   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.639451   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.639461   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.639777   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m03","uid":"84060722-cb59-478c-9b01-7517a6ae9f59","resourceVersion":"756","creationTimestamp":"2023-08-11T23:22:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0811 23:24:19.640005   32156 pod_ready.go:92] pod "kube-proxy-pv5p5" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:19.640024   32156 pod_ready.go:81] duration metric: took 399.251193ms waiting for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.640032   32156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:19.836296   32156 request.go:628] Waited for 196.176722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:19.836345   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:19.836350   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:19.836357   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:19.836363   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:19.839606   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:19.839628   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:19.839638   32156 round_trippers.go:580]     Audit-Id: 31b363d3-c633-4e2f-92bd-7a466addde38
	I0811 23:24:19.839647   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:19.839655   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:19.839664   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:19.839670   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:19.839675   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:19.839905   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-618164","namespace":"kube-system","uid":"b2a96d9a-e022-4abd-b8c6-e6ec3102773f","resourceVersion":"764","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.mirror":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.seen":"2023-08-11T23:20:15.427437689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5144 chars]
	I0811 23:24:20.036703   32156 request.go:628] Waited for 196.353181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.036768   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.036773   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.036781   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.036788   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.039687   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:20.039710   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.039720   32156 round_trippers.go:580]     Audit-Id: 707357c6-9627-4117-b85d-0cae27545e67
	I0811 23:24:20.039727   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.039735   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.039746   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.039755   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.039777   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:19 GMT
	I0811 23:24:20.039957   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:20.040361   32156 pod_ready.go:97] node "multinode-618164" hosting pod "kube-scheduler-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:20.040379   32156 pod_ready.go:81] duration metric: took 400.341096ms waiting for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	E0811 23:24:20.040390   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164" hosting pod "kube-scheduler-multinode-618164" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164" has status "Ready":"False"
	I0811 23:24:20.040398   32156 pod_ready.go:38] duration metric: took 1.77796235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:24:20.040416   32156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:24:20.051216   32156 command_runner.go:130] > -16
	I0811 23:24:20.051419   32156 ops.go:34] apiserver oom_adj: -16
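
The -16 read back from /proc/<pid>/oom_adj is the legacy OOM-killer knob, with a range of -17 (never kill) to +15. The kernel maps it onto the modern oom_score_adj scale (-1000 to +1000) as oom_score_adj = oom_adj * 1000 / 17, so -16 corresponds to about -941: the apiserver is very strongly deprioritized by the OOM killer, one step short of being exempt. minikube reads the value back here as a sanity check on the restarted apiserver.
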
	I0811 23:24:20.051435   32156 kubeadm.go:640] restartCluster took 22.996014062s
	I0811 23:24:20.051445   32156 kubeadm.go:406] StartCluster complete in 23.031064441s
	I0811 23:24:20.051465   32156 settings.go:142] acquiring lock: {Name:mkdad93b07c8b1c16ba23107571d2c5baafb252d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:20.051564   32156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:24:20.052285   32156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-9593/kubeconfig: {Name:mk5d0cc13acd7d86edf0e41f0198b0f7dd85af9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:20.052541   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:24:20.052672   32156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:24:20.052854   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:24:20.052880   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:24:20.055298   32156 out.go:177] * Enabled addons: 
	I0811 23:24:20.056871   32156 addons.go:502] enable addons completed in 4.189089ms: enabled=[]
	I0811 23:24:20.057059   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:24:20.057318   32156 round_trippers.go:463] GET https://192.168.39.6:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:24:20.057329   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.057336   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.057342   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.060017   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:20.060032   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.060039   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.060044   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.060049   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.060056   32156 round_trippers.go:580]     Content-Length: 291
	I0811 23:24:20.060061   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.060067   32156 round_trippers.go:580]     Audit-Id: a83b0b99-a1a4-4098-871a-02d028f721ef
	I0811 23:24:20.060075   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.060091   32156 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31aef6c0-c84e-4384-9e6e-68f0c22e59ba","resourceVersion":"833","creationTimestamp":"2023-08-11T23:20:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 23:24:20.060220   32156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-618164" context rescaled to 1 replicas
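
The rescale goes through the deployment's scale subresource: the GET on .../deployments/coredns/scale above returns a Scale object whose spec.replicas is already 1, and kapi.go pins the deployment to that count. The same operation in client-go, assuming an existing clientset (the helper name is hypothetical; this sketches the API surface, not minikube's code):

    package kubeutil

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the coredns Scale subresource and updates it
    // only when the replica count differs from the desired value.
    func rescaleCoreDNS(ctx context.Context, c kubernetes.Interface, replicas int32) error {
        scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired count
        }
        scale.Spec.Replicas = replicas
        _, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
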
	I0811 23:24:20.060243   32156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0811 23:24:20.061985   32156 out.go:177] * Verifying Kubernetes components...
	I0811 23:24:20.063494   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:24:20.145035   32156 command_runner.go:130] > apiVersion: v1
	I0811 23:24:20.145057   32156 command_runner.go:130] > data:
	I0811 23:24:20.145061   32156 command_runner.go:130] >   Corefile: |
	I0811 23:24:20.145065   32156 command_runner.go:130] >     .:53 {
	I0811 23:24:20.145069   32156 command_runner.go:130] >         log
	I0811 23:24:20.145074   32156 command_runner.go:130] >         errors
	I0811 23:24:20.145077   32156 command_runner.go:130] >         health {
	I0811 23:24:20.145082   32156 command_runner.go:130] >            lameduck 5s
	I0811 23:24:20.145085   32156 command_runner.go:130] >         }
	I0811 23:24:20.145089   32156 command_runner.go:130] >         ready
	I0811 23:24:20.145094   32156 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0811 23:24:20.145098   32156 command_runner.go:130] >            pods insecure
	I0811 23:24:20.145104   32156 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0811 23:24:20.145108   32156 command_runner.go:130] >            ttl 30
	I0811 23:24:20.145111   32156 command_runner.go:130] >         }
	I0811 23:24:20.145119   32156 command_runner.go:130] >         prometheus :9153
	I0811 23:24:20.145122   32156 command_runner.go:130] >         hosts {
	I0811 23:24:20.145127   32156 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0811 23:24:20.145131   32156 command_runner.go:130] >            fallthrough
	I0811 23:24:20.145136   32156 command_runner.go:130] >         }
	I0811 23:24:20.145144   32156 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0811 23:24:20.145151   32156 command_runner.go:130] >            max_concurrent 1000
	I0811 23:24:20.145156   32156 command_runner.go:130] >         }
	I0811 23:24:20.145163   32156 command_runner.go:130] >         cache 30
	I0811 23:24:20.145170   32156 command_runner.go:130] >         loop
	I0811 23:24:20.145181   32156 command_runner.go:130] >         reload
	I0811 23:24:20.145186   32156 command_runner.go:130] >         loadbalance
	I0811 23:24:20.145190   32156 command_runner.go:130] >     }
	I0811 23:24:20.145199   32156 command_runner.go:130] > kind: ConfigMap
	I0811 23:24:20.145203   32156 command_runner.go:130] > metadata:
	I0811 23:24:20.145208   32156 command_runner.go:130] >   creationTimestamp: "2023-08-11T23:20:15Z"
	I0811 23:24:20.145213   32156 command_runner.go:130] >   name: coredns
	I0811 23:24:20.145217   32156 command_runner.go:130] >   namespace: kube-system
	I0811 23:24:20.145223   32156 command_runner.go:130] >   resourceVersion: "413"
	I0811 23:24:20.145228   32156 command_runner.go:130] >   uid: e0a1f713-20c0-4280-a782-fa6099258ac8
	I0811 23:24:20.147599   32156 node_ready.go:35] waiting up to 6m0s for node "multinode-618164" to be "Ready" ...
	I0811 23:24:20.147816   32156 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0811 23:24:20.236926   32156 request.go:628] Waited for 89.227881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.236974   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.236979   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.236986   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.236993   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.239598   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:20.239623   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.239633   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.239642   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.239651   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.239659   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.239668   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.239681   32156 round_trippers.go:580]     Audit-Id: bd732ff7-ef51-4182-bdb8-dc8d4ee266e2
	I0811 23:24:20.239836   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:20.436686   32156 request.go:628] Waited for 196.404221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.436745   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.436751   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.436759   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.436767   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.439787   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:20.439829   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.439843   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.439855   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.439864   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.439875   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.439888   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.439900   32156 round_trippers.go:580]     Audit-Id: bfae5fd6-0f94-45e5-b774-b048e32b1889
	I0811 23:24:20.440000   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:20.941178   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:20.941201   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:20.941208   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:20.941224   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:20.944698   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:20.944727   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:20.944738   32156 round_trippers.go:580]     Audit-Id: 4d39ad7e-f72c-4a38-9451-ece4ac72751e
	I0811 23:24:20.944747   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:20.944763   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:20.944772   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:20.944784   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:20.944793   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:20 GMT
	I0811 23:24:20.944928   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:21.441558   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:21.441583   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:21.441595   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:21.441607   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:21.444757   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:21.444783   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:21.444793   32156 round_trippers.go:580]     Audit-Id: 44b98dc6-91a0-487b-86fb-0835dca1c6b4
	I0811 23:24:21.444802   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:21.444826   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:21.444835   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:21.444847   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:21.444859   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:21 GMT
	I0811 23:24:21.444980   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:21.940575   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:21.940609   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:21.940617   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:21.940623   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:21.943896   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:21.943933   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:21.943943   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:21.943949   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:21.943955   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:21.943960   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:21 GMT
	I0811 23:24:21.943966   32156 round_trippers.go:580]     Audit-Id: fc5b0e1f-9211-4a46-8524-8219d022c1af
	I0811 23:24:21.943971   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:21.944109   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:22.440655   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:22.440679   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:22.440688   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:22.440698   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:22.443876   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:22.443898   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:22.443905   32156 round_trippers.go:580]     Audit-Id: e532448d-981d-4f23-805b-c68ac2a9a08f
	I0811 23:24:22.443911   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:22.443917   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:22.443922   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:22.443928   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:22.443933   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:22 GMT
	I0811 23:24:22.444267   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:22.444795   32156 node_ready.go:58] node "multinode-618164" has status "Ready":"False"
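
The node_ready.go verdict above is read off the Node object returned by each GET: the node counts as Ready only when its NodeReady condition reports status True. A minimal client-go sketch of that test (illustrative only; minikube's actual helper in node_ready.go differs in detail, and the package name here is hypothetical):

	// package readiness is a hypothetical package used for these sketches.
	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady fetches the node and reports whether its NodeReady
	// condition is True, mirroring the "Ready":"False" verdicts logged here.
	func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
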
	I0811 23:24:22.941424   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:22.941444   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:22.941453   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:22.941459   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:22.944289   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:22.944313   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:22.944323   32156 round_trippers.go:580]     Audit-Id: 08cc924f-7ce0-4b24-85df-f5a51a3e2025
	I0811 23:24:22.944332   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:22.944341   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:22.944357   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:22.944375   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:22.944383   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:22 GMT
	I0811 23:24:22.944797   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:23.441459   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:23.441477   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:23.441486   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:23.441493   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:23.444128   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:23.444150   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:23.444160   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:23 GMT
	I0811 23:24:23.444168   32156 round_trippers.go:580]     Audit-Id: 844f5b54-e7ae-4bea-83cd-d78e30dd0397
	I0811 23:24:23.444176   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:23.444188   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:23.444205   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:23.444220   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:23.444765   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:23.941480   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:23.941503   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:23.941511   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:23.941517   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:23.944442   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:23.944459   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:23.944466   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:23.944471   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:23.944477   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:23.944485   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:23 GMT
	I0811 23:24:23.944494   32156 round_trippers.go:580]     Audit-Id: 08a80851-b8d5-4b93-b866-6ee39106a699
	I0811 23:24:23.944502   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:23.945447   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:24.441214   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:24.441236   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:24.441244   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:24.441250   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:24.444325   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:24.444351   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:24.444362   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:24.444372   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:24.444386   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:24 GMT
	I0811 23:24:24.444395   32156 round_trippers.go:580]     Audit-Id: 1ab7d34d-acba-42ef-b792-3a794c320756
	I0811 23:24:24.444405   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:24.444413   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:24.444620   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:24.444908   32156 node_ready.go:58] node "multinode-618164" has status "Ready":"False"
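
Note the roughly 500 ms spacing between successive GETs above (23:24:23.941, 23:24:24.441, 23:24:24.941): the status check re-runs on a fixed short interval until the node reports Ready or the wait times out. A sketch of such a loop using apimachinery's wait helpers, reusing nodeIsReady from the sketch above (it is an assumption that minikube uses this exact helper; the interval and timeout are illustrative, not minikube's values):

	import (
		"context"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForNodeReady re-checks nodeIsReady every 500ms until it returns
	// true or the timeout elapses. Interval/timeout values are illustrative.
	func waitForNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			ready, err := nodeIsReady(ctx, c, name)
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			return ready, nil
		})
	}
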
	I0811 23:24:24.941357   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:24.941382   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:24.941395   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:24.941405   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:24.944213   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:24.944231   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:24.944238   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:24.944244   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:24.944249   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:24.944254   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:24.944259   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:24 GMT
	I0811 23:24:24.944264   32156 round_trippers.go:580]     Audit-Id: 5fbafed4-32b9-4e2b-9c78-ad816b8fc27e
	I0811 23:24:24.944811   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"763","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5282 chars]
	I0811 23:24:25.441509   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.441547   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.441560   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.441570   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.444341   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.444365   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.444375   32156 round_trippers.go:580]     Audit-Id: 6e6eddea-aee5-42ea-895b-99ad1a0d559a
	I0811 23:24:25.444385   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.444393   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.444405   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.444415   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.444427   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.445015   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:25.445314   32156 node_ready.go:49] node "multinode-618164" has status "Ready":"True"
	I0811 23:24:25.445327   32156 node_ready.go:38] duration metric: took 5.2977013s waiting for node "multinode-618164" to be "Ready" ...
	I0811 23:24:25.445334   32156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
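
The extra wait announced above begins with a single list of every pod in kube-system (the PodList GET that follows) and then narrows to the system-critical labels named in the log line. A rough client-go sketch of that selection (the helper name and the client-side filtering are assumptions; only the label set is taken from the log):

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// criticalSelectors mirrors the label list printed by pod_ready.go above.
	var criticalSelectors = map[string][]string{
		"k8s-app":   {"kube-dns", "kube-proxy"},
		"component": {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
	}

	// listCriticalPods lists kube-system pods once and keeps those carrying
	// any of the system-critical labels.
	func listCriticalPods(ctx context.Context, c kubernetes.Interface) ([]corev1.Pod, error) {
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		var out []corev1.Pod
		for _, p := range pods.Items {
			for key, vals := range criticalSelectors {
				for _, v := range vals {
					if p.Labels[key] == v {
						out = append(out, p)
					}
				}
			}
		}
		return out, nil
	}
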
	I0811 23:24:25.445379   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:25.445387   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.445393   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.445399   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.452024   32156 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0811 23:24:25.452049   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.452058   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.452067   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.452075   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.452084   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.452092   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.452118   32156 round_trippers.go:580]     Audit-Id: 82e466c8-8aed-4196-bb8c-bc86da79a214
	I0811 23:24:25.453659   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"854"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83655 chars]
	I0811 23:24:25.456189   32156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:25.456260   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:25.456269   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.456276   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.456282   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.458591   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.458605   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.458611   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.458617   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.458625   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.458634   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.458652   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.458664   32156 round_trippers.go:580]     Audit-Id: 445efbee-ecd6-473e-adb2-4d52dc200b71
	I0811 23:24:25.458879   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:25.459390   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.459406   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.459414   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.459420   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.461583   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.461596   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.461603   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.461614   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.461623   32156 round_trippers.go:580]     Audit-Id: e5ef9b96-a743-4573-af95-f8506478ec65
	I0811 23:24:25.461638   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.461646   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.461654   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.461797   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:25.462204   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:25.462221   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.462231   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.462240   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.464104   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:25.464116   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.464122   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.464127   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.464132   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.464139   32156 round_trippers.go:580]     Audit-Id: dcb642da-9e5b-483a-9041-800cd982e1ff
	I0811 23:24:25.464149   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.464159   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.464315   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:25.464753   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.464766   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.464773   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.464779   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.466613   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:25.466624   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.466629   32156 round_trippers.go:580]     Audit-Id: ce0caab4-a354-4b34-94c5-2db6f3d60119
	I0811 23:24:25.466635   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.466640   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.466645   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.466651   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.466656   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.466939   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:25.968030   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:25.968056   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.968069   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.968080   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.971619   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:25.971636   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.971643   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.971648   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.971653   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.971659   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.971665   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.971670   32156 round_trippers.go:580]     Audit-Id: 8fcb4987-acec-4727-b923-e632bfd490f1
	I0811 23:24:25.972079   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:25.972606   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:25.972621   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:25.972629   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:25.972635   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:25.975228   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:25.975242   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:25.975257   32156 round_trippers.go:580]     Audit-Id: d3f41a02-cff5-4e62-9ee7-86bb23f78203
	I0811 23:24:25.975266   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:25.975277   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:25.975290   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:25.975299   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:25.975308   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:25 GMT
	I0811 23:24:25.975487   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:26.468217   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:26.468240   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.468249   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.468255   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.471094   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:26.471129   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.471140   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.471150   32156 round_trippers.go:580]     Audit-Id: 74af3dc7-7cdc-455d-8af5-ac368043c3df
	I0811 23:24:26.471157   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.471162   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.471168   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.471173   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.471267   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:26.471810   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:26.471827   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.471916   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.471945   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.474066   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:26.474087   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.474096   32156 round_trippers.go:580]     Audit-Id: e0fee66f-d6c0-4902-8c74-e56c0be1588a
	I0811 23:24:26.474106   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.474115   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.474124   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.474132   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.474142   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.474302   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:26.967963   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:26.967993   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.968001   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.968008   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.971201   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:26.971221   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.971232   32156 round_trippers.go:580]     Audit-Id: 959f34ad-81ce-44b3-8e51-0c9e243c77f1
	I0811 23:24:26.971240   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.971248   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.971257   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.971268   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.971278   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.971462   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:26.971902   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:26.971917   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:26.971928   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:26.971938   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:26.974142   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:26.974157   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:26.974167   32156 round_trippers.go:580]     Audit-Id: 3f13bbb4-ab95-414f-b33c-7be2638a258a
	I0811 23:24:26.974175   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:26.974184   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:26.974193   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:26.974204   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:26.974215   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:26 GMT
	I0811 23:24:26.974372   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:27.468239   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:27.468261   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.468271   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.468281   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.471150   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:27.471167   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.471177   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.471189   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.471199   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.471210   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.471224   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.471234   32156 round_trippers.go:580]     Audit-Id: 6d98046d-8925-4643-9ee3-138901d7afdb
	I0811 23:24:27.471416   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:27.471910   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:27.471924   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.471932   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.471938   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.474501   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:27.474515   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.474524   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.474534   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.474543   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.474552   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.474570   32156 round_trippers.go:580]     Audit-Id: 9cbe4b75-dcc6-4f58-ba2f-34b7a1a2ae2a
	I0811 23:24:27.474579   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.474951   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:27.475250   32156 pod_ready.go:102] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"False"
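
The "Ready":"False" verdict for coredns-5d78c9869d-zrmf9 above is read from the status conditions of each polled Pod object: the pod stays not-Ready until its PodReady condition flips to True. A minimal version of that check, in the same hypothetical package as the sketches above (illustrative; not minikube's exact code):

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}
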
	I0811 23:24:27.967698   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:27.967718   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.967728   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.967736   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.972072   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:27.972092   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.972102   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.972110   32156 round_trippers.go:580]     Audit-Id: 87ecb69a-34dd-45e3-bd25-8f383943fed6
	I0811 23:24:27.972117   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.972125   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.972133   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.972145   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.973050   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:27.973501   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:27.973517   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:27.973527   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:27.973537   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:27.975479   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:27.975495   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:27.975505   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:27.975514   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:27.975523   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:27.975533   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:27 GMT
	I0811 23:24:27.975543   32156 round_trippers.go:580]     Audit-Id: 1fc8299b-aa16-4865-a90e-3fcb5c8af967
	I0811 23:24:27.975558   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:27.975656   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:28.468325   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:28.468347   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.468355   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.468361   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.471902   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:28.471919   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.471926   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.471932   32156 round_trippers.go:580]     Audit-Id: 9548ec6b-6248-4c27-8c8d-925526fdd392
	I0811 23:24:28.471937   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.471942   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.471951   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.471960   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.472051   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:28.472591   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:28.472608   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.472619   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.472628   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.474945   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:28.474960   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.474967   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.474975   32156 round_trippers.go:580]     Audit-Id: bfb07ccf-8b76-446d-bec7-40e616635bc9
	I0811 23:24:28.474984   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.474995   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.475007   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.475019   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.475145   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:28.967726   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:28.967751   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.967760   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.967770   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.971148   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:28.971167   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.971176   32156 round_trippers.go:580]     Audit-Id: 0e3ff461-0ca9-48c8-9d94-b83189531448
	I0811 23:24:28.971195   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.971205   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.971216   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.971229   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.971240   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.971345   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:28.971806   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:28.971819   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:28.971826   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:28.971832   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:28.974333   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:28.974350   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:28.974359   32156 round_trippers.go:580]     Audit-Id: 185f62d2-498b-4554-bea9-486df9494c75
	I0811 23:24:28.974370   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:28.974379   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:28.974388   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:28.974397   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:28.974407   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:28 GMT
	I0811 23:24:28.974521   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:29.468235   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:29.468258   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.468269   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.468278   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.470941   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.470960   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.470969   32156 round_trippers.go:580]     Audit-Id: b572a3bd-c4d9-4159-9087-e708e5ed6c6b
	I0811 23:24:29.470977   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.470985   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.470993   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.471001   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.471011   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.471290   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:29.471705   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:29.471725   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.471732   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.471738   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.473815   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.473830   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.473839   32156 round_trippers.go:580]     Audit-Id: 0e5fb9bb-9507-4244-8f5f-0631e1f00524
	I0811 23:24:29.473847   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.473855   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.473864   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.473874   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.473884   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.474042   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:29.967622   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:29.967656   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.967664   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.967670   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.970608   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.970623   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.970629   32156 round_trippers.go:580]     Audit-Id: 82c7acea-b9cb-440e-9e7e-fb32945e6cce
	I0811 23:24:29.970635   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.970643   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.970652   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.970662   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.970672   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.970998   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:29.971596   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:29.971616   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:29.971628   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:29.971641   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:29.974243   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:29.974261   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:29.974271   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:29.974280   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:29 GMT
	I0811 23:24:29.974293   32156 round_trippers.go:580]     Audit-Id: ee208613-2a56-4f9b-abbd-a640827d3198
	I0811 23:24:29.974302   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:29.974310   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:29.974316   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:29.974467   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:29.974930   32156 pod_ready.go:102] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"False"
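
The block above is one iteration of a wait loop: pod_ready.go GETs the coredns Pod, then the Node it is scheduled on, roughly every 500ms, until the Pod reports Ready. A minimal client-go sketch of that kind of poll follows; the waitPodReady name, the 500ms interval, and the error handling are illustrative assumptions, not minikube's actual implementation.

    // Minimal sketch (assumptions noted above): poll a Pod until its
    // PodReady condition is True, mirroring the GET-every-500ms pattern
    // visible in this trace.
    package waitutil

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
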
	I0811 23:24:30.468073   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:30.468099   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.468110   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.468120   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.470897   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:30.470916   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.470929   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.470941   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.470956   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.470964   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.470976   32156 round_trippers.go:580]     Audit-Id: 82f699de-0a54-481c-bb12-f87a0daa84e9
	I0811 23:24:30.470986   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.471134   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:30.471737   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:30.471751   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.471758   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.471767   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.475683   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:30.475698   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.475704   32156 round_trippers.go:580]     Audit-Id: ac6a86d1-75a4-46ae-807f-7ebfd31289bc
	I0811 23:24:30.475710   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.475720   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.475728   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.475738   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.475746   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.475902   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:30.967564   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:30.967586   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.967594   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.967601   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.972202   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:30.972221   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.972229   32156 round_trippers.go:580]     Audit-Id: e7a2e0c8-d59f-43ca-85fa-026ce0fe0d76
	I0811 23:24:30.972236   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.972247   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.972258   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.972267   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.972280   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.972499   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:30.973091   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:30.973113   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:30.973124   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:30.973139   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:30.977456   32156 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:24:30.977471   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:30.977477   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:30.977485   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:30.977497   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:30.977506   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:30.977518   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:30 GMT
	I0811 23:24:30.977528   32156 round_trippers.go:580]     Audit-Id: 381e4024-f9fe-4403-9375-88a00358975b
	I0811 23:24:30.977682   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:31.468315   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:31.468335   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.468345   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.468352   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.471062   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:31.471083   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.471094   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.471130   32156 round_trippers.go:580]     Audit-Id: fa12f66b-5889-4686-8116-11fe87af94c0
	I0811 23:24:31.471144   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.471152   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.471160   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.471167   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.471409   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:31.471875   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:31.471889   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.471896   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.471906   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.473962   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:31.473981   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.473991   32156 round_trippers.go:580]     Audit-Id: f7f3b369-6798-4044-b5ee-de737997014c
	I0811 23:24:31.473999   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.474008   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.474020   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.474032   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.474044   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.474263   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:31.967951   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:31.967972   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.967980   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.967986   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.971967   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:31.971990   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.972001   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.972008   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.972016   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.972024   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.972033   32156 round_trippers.go:580]     Audit-Id: 9fd3cb53-2603-40b0-bd50-00a987b1e227
	I0811 23:24:31.972042   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.972171   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:31.972605   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:31.972619   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:31.972629   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:31.972637   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:31.974757   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:31.974771   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:31.974780   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:31.974789   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:31.974799   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:31.974815   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:31.974823   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:31 GMT
	I0811 23:24:31.974833   32156 round_trippers.go:580]     Audit-Id: 85d0f6ae-272d-4dc2-a561-40c101a5161a
	I0811 23:24:31.975024   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:31.975405   32156 pod_ready.go:102] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"False"
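
Each request/response group in this trace is emitted by client-go's debugging round tripper (the round_trippers.go frames), which is switched on through klog verbosity; request and response headers appear at roughly -v=8. A hedged sketch of enabling the same tracing in your own client is below; the particular combination of DebugLevel values is an assumption.

    // Sketch: wrap a rest.Config's transport so every API call logs
    // method/URL, timing, headers, and response status, as in this trace.
    package waitutil

    import (
        "net/http"

        "k8s.io/client-go/rest"
        "k8s.io/client-go/transport"
    )

    func withRequestTracing(cfg *rest.Config) *rest.Config {
        cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
            return transport.NewDebuggingRoundTripper(rt,
                transport.DebugURLTiming,       // "GET https://... 200 OK in 2 milliseconds"
                transport.DebugRequestHeaders,  // "Request Headers: ..."
                transport.DebugResponseStatus,
                transport.DebugResponseHeaders, // "Response Headers: ..."
            )
        }
        return cfg
    }
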
	I0811 23:24:32.468362   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:32.468382   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.468393   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.468402   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.473825   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:32.473845   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.473855   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.473863   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.473872   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.473881   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.473892   32156 round_trippers.go:580]     Audit-Id: c552d16d-a3b4-4d66-a376-eefd1c10eb1e
	I0811 23:24:32.473902   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.474014   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"767","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
	I0811 23:24:32.474477   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:32.474490   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.474501   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.474510   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.477314   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:32.477331   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.477340   32156 round_trippers.go:580]     Audit-Id: 4e9ec59d-c226-4b0b-a98b-dba59305efde
	I0811 23:24:32.477348   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.477356   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.477365   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.477377   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.477387   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.477551   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:32.967630   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:32.967651   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.967659   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.967665   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.970961   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:32.970980   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.970990   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.971000   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.971013   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.971026   32156 round_trippers.go:580]     Audit-Id: 224adbd2-8bc3-4670-a026-066c23e93164
	I0811 23:24:32.971038   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.971048   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.971326   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"878","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0811 23:24:32.971757   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:32.971769   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:32.971776   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:32.971782   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:32.974205   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:32.974220   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:32.974229   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:32.974238   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:32 GMT
	I0811 23:24:32.974248   32156 round_trippers.go:580]     Audit-Id: 0d872ae4-4e9b-4635-9e65-ba775b7a8de7
	I0811 23:24:32.974259   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:32.974268   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:32.974281   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:32.974584   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.468329   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:33.468352   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.468363   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.468371   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.472124   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:33.472147   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.472154   32156 round_trippers.go:580]     Audit-Id: 2ae31ed4-b404-43c6-aa1e-25c8cc2fb9f7
	I0811 23:24:33.472160   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.472166   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.472171   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.472177   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.472183   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.472683   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"878","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
	I0811 23:24:33.473126   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.473138   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.473145   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.473151   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.475438   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.475460   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.475470   32156 round_trippers.go:580]     Audit-Id: e66ada46-5f8a-42d5-bcd0-9776162a1903
	I0811 23:24:33.475479   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.475493   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.475502   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.475511   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.475517   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.475604   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.968224   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:24:33.968246   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.968255   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.968261   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.971186   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.971206   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.971219   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.971227   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.971235   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.971246   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.971254   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.971266   32156 round_trippers.go:580]     Audit-Id: 89391019-02eb-4a7a-97b0-c7942170203a
	I0811 23:24:33.971602   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0811 23:24:33.972003   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.972014   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.972021   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.972027   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.974433   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.974452   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.974460   32156 round_trippers.go:580]     Audit-Id: fab66579-ff2f-4e3c-ace4-bb6c130e597c
	I0811 23:24:33.974466   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.974471   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.974480   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.974488   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.974498   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.975061   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.975340   32156 pod_ready.go:92] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.975355   32156 pod_ready.go:81] duration metric: took 8.519145326s waiting for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.975363   32156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
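
Note the Pod's resourceVersion advancing through the trace (767, then 878, then 884) as the kubelet posts status updates; the loop exits only once the PodReady condition reads True, records the elapsed time (8.52s here), and moves on to the next pod. A watch is the usual alternative to this re-GET pattern; a sketch under the same imports and assumptions as the earlier waitPodReady example:

    // Sketch (an alternative, not what pod_ready.go does): stream the
    // status transitions the poll loop observes as resourceVersion jumps,
    // instead of re-GETting every 500ms.
    func watchPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        return ctx.Err() // channel closed: context cancelled or watch expired
    }
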
	I0811 23:24:33.975402   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-618164
	I0811 23:24:33.975409   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.975416   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.975422   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.977484   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.977500   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.977507   32156 round_trippers.go:580]     Audit-Id: 6eac740e-8b4f-45e4-a18e-1f84084abe24
	I0811 23:24:33.977512   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.977517   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.977526   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.977531   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.977537   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.977667   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"868","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I0811 23:24:33.977982   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.977992   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.977998   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.978006   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.980986   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.981000   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.981006   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.981011   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.981016   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.981025   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.981041   32156 round_trippers.go:580]     Audit-Id: 08f0ed9b-c430-4199-9103-44ca5d887cec
	I0811 23:24:33.981050   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.981179   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.981412   32156 pod_ready.go:92] pod "etcd-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.981423   32156 pod_ready.go:81] duration metric: took 6.055093ms waiting for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.981438   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
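
coredns needed 8.5s to come back, etcd was already Ready and returned in 6ms, and the same check now runs for kube-apiserver. Driving the earlier (assumed) waitPodReady helper over that sequence looks roughly like the sketch below; the "<component>-<nodename>" static-pod naming matches the names in this log, while the component list beyond etcd and kube-apiserver is an assumption (fmt and time imports as usual).

    // Sketch: wait for each control-plane static pod in turn, mirroring
    // the coredns -> etcd -> kube-apiserver order of this trace.
    func waitControlPlane(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
        for _, component := range []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
            name := component + "-" + nodeName // e.g. "etcd-multinode-618164"
            if err := waitPodReady(ctx, cs, "kube-system", name, 6*time.Minute); err != nil {
                return fmt.Errorf("pod %s never became Ready: %w", name, err)
            }
        }
        return nil
    }
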
	I0811 23:24:33.981483   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-618164
	I0811 23:24:33.981490   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.981496   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.981502   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.983575   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.983593   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.983600   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.983608   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.983613   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.983621   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.983627   32156 round_trippers.go:580]     Audit-Id: 2fc96f46-941e-43f2-be3d-0a8a75940bcc
	I0811 23:24:33.983634   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.983776   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-618164","namespace":"kube-system","uid":"a1145d9b-2c2a-42b1-bbe6-142472dc9d01","resourceVersion":"870","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.6:8443","kubernetes.io/config.hash":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.mirror":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.seen":"2023-08-11T23:20:15.427440318Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7597 chars]
	I0811 23:24:33.984096   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.984106   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.984112   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.984118   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.985746   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:33.985762   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.985768   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.985774   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.985782   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.985788   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.985796   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.985801   32156 round_trippers.go:580]     Audit-Id: 648f59a3-f4c1-456b-bc5b-9e6c40876052
	I0811 23:24:33.985939   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.986168   32156 pod_ready.go:92] pod "kube-apiserver-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.986178   32156 pod_ready.go:81] duration metric: took 4.731192ms waiting for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.986186   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.986220   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-618164
	I0811 23:24:33.986227   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.986234   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.986240   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.988258   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.988273   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.988280   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.988286   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.988293   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.988299   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.988312   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.988322   32156 round_trippers.go:580]     Audit-Id: 5b6aa986-f47f-4a3f-84d3-e0186ec0151d
	I0811 23:24:33.988838   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-618164","namespace":"kube-system","uid":"41f34044-7115-493f-94d8-53f69fd37242","resourceVersion":"848","creationTimestamp":"2023-08-11T23:20:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.mirror":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.seen":"2023-08-11T23:20:06.002920339Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7170 chars]
	I0811 23:24:33.989165   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:33.989175   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.989182   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.989188   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.990811   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:33.990824   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.990833   32156 round_trippers.go:580]     Audit-Id: 19482bca-52fd-4f68-b367-dd9b5777c7e5
	I0811 23:24:33.990841   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.990847   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.990853   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.990859   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.990869   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.991010   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:33.991294   32156 pod_ready.go:92] pod "kube-controller-manager-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.991308   32156 pod_ready.go:81] duration metric: took 5.116437ms waiting for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.991315   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.991359   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:24:33.991373   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.991382   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.991392   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.993626   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:33.993640   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.993651   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.993660   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.993669   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.993675   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.993685   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.993690   32156 round_trippers.go:580]     Audit-Id: 8deac8a8-7fc0-4662-9a6a-98a6486d95b7
	I0811 23:24:33.993919   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9ldtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff783df6-3af7-44cf-bc60-843db8420efa","resourceVersion":"534","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0811 23:24:33.994228   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:24:33.994239   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:33.994247   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:33.994253   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:33.995880   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:24:33.995892   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:33.995898   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:33.995903   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:33.995909   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:33.995914   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:33.995920   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:33 GMT
	I0811 23:24:33.995925   32156 round_trippers.go:580]     Audit-Id: 87b8b550-3255-4a93-b277-cc6dd7ee6bc1
	I0811 23:24:33.996105   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"5117de97-d432-4fe0-baad-4ef71b0a5470","resourceVersion":"599","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3267 chars]
	I0811 23:24:33.996285   32156 pod_ready.go:92] pod "kube-proxy-9ldtq" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:33.996295   32156 pod_ready.go:81] duration metric: took 4.975043ms waiting for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:33.996302   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.168683   32156 request.go:628] Waited for 172.32863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:34.168758   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:24:34.168764   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.168773   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.168782   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.172087   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:34.172105   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.172111   32156 round_trippers.go:580]     Audit-Id: b9389cec-75af-4f94-8a9d-7240b0bfd7f6
	I0811 23:24:34.172117   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.172126   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.172132   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.172140   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.172145   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.172410   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-glw45","generateName":"kube-proxy-","namespace":"kube-system","uid":"4616f16f-9566-447c-90cd-8e37c18508e3","resourceVersion":"843","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
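The repeated `request.go:628` "Waited for ... due to client-side throttling" messages in this stretch of the log come from client-go's local rate limiter; as the message itself notes, this is not server-side API priority and fairness. A minimal sketch of where those limits live, assuming a kubeconfig path for illustration; QPS=5 and Burst=10 are client-go's defaults, and the raised values below are arbitrary:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a kubeconfig (the path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5 and Burst=10; a burst of GETs beyond that
	// budget is queued locally, which is what the "Waited for ..." lines record.
	cfg.QPS = 50
	cfg.Burst = 100
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```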
	I0811 23:24:34.369108   32156 request.go:628] Waited for 196.33367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:34.369196   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:34.369204   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.369216   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.369234   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.372658   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:34.372675   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.372682   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.372688   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.372693   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.372699   32156 round_trippers.go:580]     Audit-Id: 82e96bbf-57a2-484f-bf9d-2381f69a4c81
	I0811 23:24:34.372706   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.372719   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.372919   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:34.373206   32156 pod_ready.go:92] pod "kube-proxy-glw45" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:34.373221   32156 pod_ready.go:81] duration metric: took 376.904763ms waiting for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.373234   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.568660   32156 request.go:628] Waited for 195.365222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:34.568733   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:24:34.568741   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.568749   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.568755   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.571454   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:34.571477   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.571487   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.571495   32156 round_trippers.go:580]     Audit-Id: 8d618cf0-88d2-47c6-9ef6-7b5170fa9cd2
	I0811 23:24:34.571503   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.571511   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.571522   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.571533   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.571863   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pv5p5","generateName":"kube-proxy-","namespace":"kube-system","uid":"08e6223f-0c5c-47bd-b37d-67f279f4d4be","resourceVersion":"737","creationTimestamp":"2023-08-11T23:22:07Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0811 23:24:34.768622   32156 request.go:628] Waited for 196.348003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:34.768682   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:24:34.768701   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.768711   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.768721   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.771375   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:34.771392   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.771399   32156 round_trippers.go:580]     Audit-Id: 3fbb8d82-2b28-4e58-8ae8-bacb17cfc2f9
	I0811 23:24:34.771405   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.771410   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.771415   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.771421   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.771426   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.771671   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m03","uid":"84060722-cb59-478c-9b01-7517a6ae9f59","resourceVersion":"756","creationTimestamp":"2023-08-11T23:22:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I0811 23:24:34.771907   32156 pod_ready.go:92] pod "kube-proxy-pv5p5" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:34.771918   32156 pod_ready.go:81] duration metric: took 398.678555ms waiting for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.771927   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:34.968272   32156 request.go:628] Waited for 196.292497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:34.968344   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:24:34.968350   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:34.968360   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:34.968375   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:34.972172   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:34.972191   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:34.972197   32156 round_trippers.go:580]     Audit-Id: d0684e37-d5f1-424a-8a17-9bb10a0e3328
	I0811 23:24:34.972203   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:34.972208   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:34.972213   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:34.972219   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:34.972224   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:34 GMT
	I0811 23:24:34.972362   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-618164","namespace":"kube-system","uid":"b2a96d9a-e022-4abd-b8c6-e6ec3102773f","resourceVersion":"871","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.mirror":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.seen":"2023-08-11T23:20:15.427437689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I0811 23:24:35.169110   32156 request.go:628] Waited for 196.363918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:35.169155   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:24:35.169159   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.169166   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.169172   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.171710   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:35.171731   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.171744   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.171756   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.171765   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.171774   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.171787   32156 round_trippers.go:580]     Audit-Id: d364fb7f-6e32-49c1-9e80-a4d60178f479
	I0811 23:24:35.171801   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.172414   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:24:35.172719   32156 pod_ready.go:92] pod "kube-scheduler-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:24:35.172735   32156 pod_ready.go:81] duration metric: took 400.801391ms waiting for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:24:35.172747   32156 pod_ready.go:38] duration metric: took 9.727404873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
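The `pod_ready.go` phase that just completed polls each system-critical pod until its `PodReady` condition reports `True`, with a 6-minute budget per pod. A minimal sketch of that style of wait using client-go; this illustrates the pattern, not minikube's actual implementation:

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod's PodReady condition is True,
// mirroring the "waiting up to 6m0s for pod ... to be Ready" lines above.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as "not ready yet" and keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```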
	I0811 23:24:35.172770   32156 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:24:35.172828   32156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:24:35.185884   32156 command_runner.go:130] > 1697
	I0811 23:24:35.186187   32156 api_server.go:72] duration metric: took 15.125922974s to wait for apiserver process to appear ...
	I0811 23:24:35.186204   32156 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:24:35.186221   32156 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:24:35.192470   32156 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0811 23:24:35.192520   32156 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0811 23:24:35.192525   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.192534   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.192541   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.193372   32156 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0811 23:24:35.193388   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.193397   32156 round_trippers.go:580]     Content-Length: 263
	I0811 23:24:35.193406   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.193414   32156 round_trippers.go:580]     Audit-Id: ca9c7d49-11cf-466b-973a-b094139ea178
	I0811 23:24:35.193422   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.193434   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.193444   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.193454   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.193473   32156 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0811 23:24:35.193520   32156 api_server.go:141] control plane version: v1.27.4
	I0811 23:24:35.193534   32156 api_server.go:131] duration metric: took 7.324354ms to wait for apiserver health ...
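The health phase above is two unauthenticated-looking GETs: `/healthz` must return 200 with the literal body `ok`, and `/version` returns JSON whose fields match the standard `version.Info` struct. A minimal sketch of the same checks; the insecure TLS config and base URL are assumptions for illustration (minikube actually authenticates with client certificates):

```go
package sketch

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	"k8s.io/apimachinery/pkg/version"
)

func checkAPIServer(base string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}

	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
	}

	resp, err = client.Get(base + "/version")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var info version.Info // fields Major/Minor/GitVersion/... match the JSON above
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		return err
	}
	fmt.Println("control plane version:", info.GitVersion) // e.g. v1.27.4
	return nil
}
```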
	I0811 23:24:35.193542   32156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:24:35.368931   32156 request.go:628] Waited for 175.311269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.368990   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.368995   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.369003   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.369010   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.374076   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:35.374102   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.374112   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.374120   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.374127   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.374135   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.374143   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.374155   32156 round_trippers.go:580]     Audit-Id: f23f6832-71ab-429b-86cd-18cc8e984ed8
	I0811 23:24:35.375597   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82891 chars]
	I0811 23:24:35.378074   32156 system_pods.go:59] 12 kube-system pods found
	I0811 23:24:35.378097   32156 system_pods.go:61] "coredns-5d78c9869d-zrmf9" [c3c83ae1-ae12-4872-9c78-4aff9f1cefe4] Running
	I0811 23:24:35.378104   32156 system_pods.go:61] "etcd-multinode-618164" [543135b3-5e52-43aa-af7c-1fea5cfb95b6] Running
	I0811 23:24:35.378113   32156 system_pods.go:61] "kindnet-clfqj" [b3e12c4b-402f-467b-a1f2-f7db2ae3d0ef] Running
	I0811 23:24:35.378118   32156 system_pods.go:61] "kindnet-m2c5t" [5264f13e-c667-4d82-912f-49c23eaf31cd] Running
	I0811 23:24:35.378124   32156 system_pods.go:61] "kindnet-szdxp" [d827d201-1ae4-4db8-858f-0fda601d5c40] Running
	I0811 23:24:35.378130   32156 system_pods.go:61] "kube-apiserver-multinode-618164" [a1145d9b-2c2a-42b1-bbe6-142472dc9d01] Running
	I0811 23:24:35.378137   32156 system_pods.go:61] "kube-controller-manager-multinode-618164" [41f34044-7115-493f-94d8-53f69fd37242] Running
	I0811 23:24:35.378148   32156 system_pods.go:61] "kube-proxy-9ldtq" [ff783df6-3af7-44cf-bc60-843db8420efa] Running
	I0811 23:24:35.378155   32156 system_pods.go:61] "kube-proxy-glw45" [4616f16f-9566-447c-90cd-8e37c18508e3] Running
	I0811 23:24:35.378161   32156 system_pods.go:61] "kube-proxy-pv5p5" [08e6223f-0c5c-47bd-b37d-67f279f4d4be] Running
	I0811 23:24:35.378169   32156 system_pods.go:61] "kube-scheduler-multinode-618164" [b2a96d9a-e022-4abd-b8c6-e6ec3102773f] Running
	I0811 23:24:35.378176   32156 system_pods.go:61] "storage-provisioner" [84ba55f6-4725-46ae-810f-130cbb82dd7f] Running
	I0811 23:24:35.378185   32156 system_pods.go:74] duration metric: took 184.636196ms to wait for pod list to return data ...
	I0811 23:24:35.378196   32156 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:24:35.568633   32156 request.go:628] Waited for 190.369653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:24:35.568710   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:24:35.568718   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.568728   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.568748   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.571469   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:24:35.571512   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.571522   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.571532   32156 round_trippers.go:580]     Audit-Id: e42f39e6-9916-4002-b539-06cfc6cba17e
	I0811 23:24:35.571543   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.571554   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.571567   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.571577   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.571594   32156 round_trippers.go:580]     Content-Length: 261
	I0811 23:24:35.571617   32156 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"892"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"917f0a1c-39f6-4f23-806b-10a0703a649d","resourceVersion":"350","creationTimestamp":"2023-08-11T23:20:27Z"}}]}
	I0811 23:24:35.571798   32156 default_sa.go:45] found service account: "default"
	I0811 23:24:35.571813   32156 default_sa.go:55] duration metric: took 193.611319ms for default service account to be created ...
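Waiting for the `default` ServiceAccount matters because pods in a namespace cannot be admitted until the controller-manager's ServiceAccount controller has created it. A minimal sketch of that check, as an illustration of the pattern rather than minikube's code:

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA polls the default namespace until the "default"
// ServiceAccount exists, matching the default_sa.go wait above.
func waitDefaultSA(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollImmediate(200*time.Millisecond, time.Minute, func() (bool, error) {
		sas, err := c.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, nil
		}
		for _, sa := range sas.Items {
			if sa.Name == "default" {
				return true, nil
			}
		}
		return false, nil
	})
}
```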
	I0811 23:24:35.571823   32156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:24:35.769307   32156 request.go:628] Waited for 197.386177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.769371   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:24:35.769379   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.769390   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.769407   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.774853   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:35.774883   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.774893   32156 round_trippers.go:580]     Audit-Id: 1befc1fe-531e-4081-8838-356f524138aa
	I0811 23:24:35.774901   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.774908   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.774916   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.774924   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.774934   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.777324   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82891 chars]
	I0811 23:24:35.780807   32156 system_pods.go:86] 12 kube-system pods found
	I0811 23:24:35.780828   32156 system_pods.go:89] "coredns-5d78c9869d-zrmf9" [c3c83ae1-ae12-4872-9c78-4aff9f1cefe4] Running
	I0811 23:24:35.780834   32156 system_pods.go:89] "etcd-multinode-618164" [543135b3-5e52-43aa-af7c-1fea5cfb95b6] Running
	I0811 23:24:35.780838   32156 system_pods.go:89] "kindnet-clfqj" [b3e12c4b-402f-467b-a1f2-f7db2ae3d0ef] Running
	I0811 23:24:35.780841   32156 system_pods.go:89] "kindnet-m2c5t" [5264f13e-c667-4d82-912f-49c23eaf31cd] Running
	I0811 23:24:35.780845   32156 system_pods.go:89] "kindnet-szdxp" [d827d201-1ae4-4db8-858f-0fda601d5c40] Running
	I0811 23:24:35.780849   32156 system_pods.go:89] "kube-apiserver-multinode-618164" [a1145d9b-2c2a-42b1-bbe6-142472dc9d01] Running
	I0811 23:24:35.780854   32156 system_pods.go:89] "kube-controller-manager-multinode-618164" [41f34044-7115-493f-94d8-53f69fd37242] Running
	I0811 23:24:35.780858   32156 system_pods.go:89] "kube-proxy-9ldtq" [ff783df6-3af7-44cf-bc60-843db8420efa] Running
	I0811 23:24:35.780862   32156 system_pods.go:89] "kube-proxy-glw45" [4616f16f-9566-447c-90cd-8e37c18508e3] Running
	I0811 23:24:35.780868   32156 system_pods.go:89] "kube-proxy-pv5p5" [08e6223f-0c5c-47bd-b37d-67f279f4d4be] Running
	I0811 23:24:35.780872   32156 system_pods.go:89] "kube-scheduler-multinode-618164" [b2a96d9a-e022-4abd-b8c6-e6ec3102773f] Running
	I0811 23:24:35.780878   32156 system_pods.go:89] "storage-provisioner" [84ba55f6-4725-46ae-810f-130cbb82dd7f] Running
	I0811 23:24:35.780883   32156 system_pods.go:126] duration metric: took 209.056156ms to wait for k8s-apps to be running ...
	I0811 23:24:35.780891   32156 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:24:35.780929   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:24:35.795511   32156 system_svc.go:56] duration metric: took 14.610121ms WaitForService to wait for kubelet.
	I0811 23:24:35.795536   32156 kubeadm.go:581] duration metric: took 15.735272927s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
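The kubelet check above leans entirely on systemctl's exit status: `is-active --quiet` prints nothing and exits 0 when the unit is active, non-zero otherwise. A minimal local sketch of the same idea (minikube runs the command over SSH inside the VM):

```go
package sketch

import "os/exec"

// kubeletRunning reports whether the kubelet systemd unit is active.
// exec's Run returns a non-nil error for non-zero exit codes, so
// err == nil means systemctl reported "active".
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
```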
	I0811 23:24:35.795553   32156 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:24:35.969004   32156 request.go:628] Waited for 173.360492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0811 23:24:35.969066   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0811 23:24:35.969072   32156 round_trippers.go:469] Request Headers:
	I0811 23:24:35.969081   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:35.969099   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:24:35.972347   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:24:35.972371   32156 round_trippers.go:577] Response Headers:
	I0811 23:24:35.972381   32156 round_trippers.go:580]     Audit-Id: ecf843d8-a83f-4a75-9e0d-626497b2f5fd
	I0811 23:24:35.972395   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:35.972403   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:35.972413   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:24:35.972423   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:24:35.972435   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:35 GMT
	I0811 23:24:35.972823   32156 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13542 chars]
	I0811 23:24:35.973334   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:35.973350   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:35.973359   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:35.973363   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:35.973366   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:24:35.973369   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:24:35.973372   32156 node_conditions.go:105] duration metric: took 177.812858ms to run NodePressure ...
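The NodePressure verification reads each node's reported capacity, which is where the "ephemeral capacity is 17784752Ki" and "cpu capacity is 2" lines come from. A minimal sketch of pulling those values from a node list; an illustration only, since minikube's `node_conditions.go` also inspects the pressure conditions themselves:

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the two capacity values
// logged above for each of the three cluster nodes.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		// Copy the quantities out of the map so their String methods
		// (pointer receivers) are callable.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```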
	I0811 23:24:35.973381   32156 start.go:228] waiting for startup goroutines ...
	I0811 23:24:35.973390   32156 start.go:233] waiting for cluster config update ...
	I0811 23:24:35.973396   32156 start.go:242] writing updated cluster config ...
	I0811 23:24:35.973816   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:24:35.973902   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:24:35.976929   32156 out.go:177] * Starting worker node multinode-618164-m02 in cluster multinode-618164
	I0811 23:24:35.978578   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:24:35.978605   32156 cache.go:57] Caching tarball of preloaded images
	I0811 23:24:35.978714   32156 preload.go:174] Found /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0811 23:24:35.978730   32156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0811 23:24:35.978829   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
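The preload step above is a cache hit: the preloaded-images tarball already exists under `.minikube/cache`, so the download is skipped. A minimal sketch of that decision; the path handling is an assumption for illustration:

```go
package sketch

import (
	"errors"
	"io/fs"
	"os"
)

// needsDownload reports whether the preloaded-images tarball must be fetched,
// i.e. whether the cached file at cachePath is absent.
func needsDownload(cachePath string) (bool, error) {
	_, err := os.Stat(cachePath)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil
	}
	return false, err // nil on a cache hit; any other stat error is surfaced
}
```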
	I0811 23:24:35.978998   32156 start.go:365] acquiring machines lock for multinode-618164-m02: {Name:mk5e6cee1d1e9195cd61b1fff8d9384d7220567d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0811 23:24:35.979041   32156 start.go:369] acquired machines lock for "multinode-618164-m02" in 23.215µs
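The machines lock acquired above serializes operations on a given machine so concurrent goroutines cannot create or start the same VM twice; the log shows minikube's lock carries a 500ms delay and 13m timeout. A minimal in-process sketch of the idea only, using one mutex per machine name:

```go
package sketch

import "sync"

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{}
)

// lockFor returns the mutex guarding the named machine, creating it on
// first use. Callers Lock/Unlock the returned mutex around start/create.
func lockFor(machine string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	l, ok := locks[machine]
	if !ok {
		l = &sync.Mutex{}
		locks[machine] = l
	}
	return l
}
```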
	I0811 23:24:35.979058   32156 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:24:35.979067   32156 fix.go:54] fixHost starting: m02
	I0811 23:24:35.979362   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:24:35.979386   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:24:35.993765   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0811 23:24:35.994154   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:24:35.994621   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:24:35.994641   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:24:35.994936   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:24:35.995095   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:35.995252   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetState
	I0811 23:24:35.996775   32156 fix.go:102] recreateIfNeeded on multinode-618164-m02: state=Stopped err=<nil>
	I0811 23:24:35.996795   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	W0811 23:24:35.996971   32156 fix.go:128] unexpected machine state, will restart: <nil>
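The "Launching plugin server ... listening at address 127.0.0.1:44141" lines reflect the docker-machine plugin model: each driver runs as a separate process, and calls like `.GetState` travel over RPC on a local port. A minimal sketch of that shape with net/rpc; the `Driver` type and its method are illustrative, not the kvm2 driver's real interface:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

type Driver struct{}

// GetState is a sample net/rpc method: exported, two args, second a pointer.
func (d *Driver) GetState(_ int, state *string) error {
	*state = "Stopped"
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // OS-assigned port, as in the log
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var state string
	if err := client.Call("Driver.GetState", 0, &state); err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine state:", state) // "Stopped" is what triggers the restart
}
```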
	I0811 23:24:35.998957   32156 out.go:177] * Restarting existing kvm2 VM for "multinode-618164-m02" ...
	I0811 23:24:36.000530   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .Start
	I0811 23:24:36.000704   32156 main.go:141] libmachine: (multinode-618164-m02) Ensuring networks are active...
	I0811 23:24:36.001375   32156 main.go:141] libmachine: (multinode-618164-m02) Ensuring network default is active
	I0811 23:24:36.001701   32156 main.go:141] libmachine: (multinode-618164-m02) Ensuring network mk-multinode-618164 is active
	I0811 23:24:36.002092   32156 main.go:141] libmachine: (multinode-618164-m02) Getting domain xml...
	I0811 23:24:36.002832   32156 main.go:141] libmachine: (multinode-618164-m02) Creating domain...
	I0811 23:24:37.220070   32156 main.go:141] libmachine: (multinode-618164-m02) Waiting to get IP...
	I0811 23:24:37.220993   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:37.221369   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:37.221470   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:37.221355   32402 retry.go:31] will retry after 277.268435ms: waiting for machine to come up
	I0811 23:24:37.499821   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:37.500295   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:37.500318   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:37.500248   32402 retry.go:31] will retry after 387.190873ms: waiting for machine to come up
	I0811 23:24:37.888587   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:37.889165   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:37.889188   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:37.889136   32402 retry.go:31] will retry after 366.432092ms: waiting for machine to come up
	I0811 23:24:38.256533   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:38.256993   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:38.257024   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:38.256934   32402 retry.go:31] will retry after 391.941627ms: waiting for machine to come up
	I0811 23:24:38.650579   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:38.650997   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:38.651027   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:38.650941   32402 retry.go:31] will retry after 680.694158ms: waiting for machine to come up
	I0811 23:24:39.332856   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:39.333304   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:39.333387   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:39.333268   32402 retry.go:31] will retry after 868.271634ms: waiting for machine to come up
	I0811 23:24:40.203328   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:40.203706   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:40.203748   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:40.203650   32402 retry.go:31] will retry after 997.014712ms: waiting for machine to come up
	I0811 23:24:41.202277   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:41.202642   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:41.202670   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:41.202590   32402 retry.go:31] will retry after 1.410631845s: waiting for machine to come up
	I0811 23:24:42.615487   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:42.615972   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:42.616014   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:42.615931   32402 retry.go:31] will retry after 1.553384999s: waiting for machine to come up
	I0811 23:24:44.171644   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:44.172128   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:44.172154   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:44.172083   32402 retry.go:31] will retry after 2.193325027s: waiting for machine to come up
	I0811 23:24:46.366732   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:46.367241   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:46.367271   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:46.367187   32402 retry.go:31] will retry after 2.303211004s: waiting for machine to come up
	I0811 23:24:48.672552   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:48.673089   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:48.673117   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:48.673037   32402 retry.go:31] will retry after 3.562523492s: waiting for machine to come up
	I0811 23:24:52.237381   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:52.237950   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | unable to find current IP address of domain multinode-618164-m02 in network mk-multinode-618164
	I0811 23:24:52.237976   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | I0811 23:24:52.237911   32402 retry.go:31] will retry after 3.340176602s: waiting for machine to come up
	I0811 23:24:55.582334   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.582750   32156 main.go:141] libmachine: (multinode-618164-m02) Found IP for machine: 192.168.39.254
	I0811 23:24:55.582782   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has current primary IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
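	The run of "will retry after ..." lines above is libmachine polling libvirt's DHCP leases with randomized, roughly doubling delays until the VM's lease appears. A minimal, self-contained Go sketch of that backoff loop; waitForIP and lookupLeaseIP are illustrative names, not minikube's actual retry.go API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupLeaseIP is a hypothetical stand-in for querying libvirt's
// DHCP leases for the domain's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

// waitForIP polls until an IP appears, sleeping a randomized,
// roughly doubling interval between attempts, like the
// "will retry after ..." lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:d3:12:e8", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}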
	I0811 23:24:55.582790   32156 main.go:141] libmachine: (multinode-618164-m02) Reserving static IP address...
	I0811 23:24:55.583220   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "multinode-618164-m02", mac: "52:54:00:d3:12:e8", ip: "192.168.39.254"} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.583243   32156 main.go:141] libmachine: (multinode-618164-m02) Reserved static IP address: 192.168.39.254
	I0811 23:24:55.583255   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | skip adding static IP to network mk-multinode-618164 - found existing host DHCP lease matching {name: "multinode-618164-m02", mac: "52:54:00:d3:12:e8", ip: "192.168.39.254"}
	I0811 23:24:55.583266   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | Getting to WaitForSSH function...
	I0811 23:24:55.583273   32156 main.go:141] libmachine: (multinode-618164-m02) Waiting for SSH to be available...
	I0811 23:24:55.585360   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.585819   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.585852   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.585962   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | Using SSH client type: external
	I0811 23:24:55.585985   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa (-rw-------)
	I0811 23:24:55.586015   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0811 23:24:55.586029   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | About to run SSH command:
	I0811 23:24:55.586045   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | exit 0
	I0811 23:24:55.674828   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | SSH cmd err, output: <nil>: 
	I0811 23:24:55.675253   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetConfigRaw
	I0811 23:24:55.675916   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:24:55.678425   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.678834   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.678875   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.679160   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:24:55.679394   32156 machine.go:88] provisioning docker machine ...
	I0811 23:24:55.679414   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:55.679607   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetMachineName
	I0811 23:24:55.679774   32156 buildroot.go:166] provisioning hostname "multinode-618164-m02"
	I0811 23:24:55.679791   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetMachineName
	I0811 23:24:55.679892   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:55.681946   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.682298   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.682330   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.682431   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:55.682573   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.682733   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.682849   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:55.683015   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:55.683464   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:55.683478   32156 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-618164-m02 && echo "multinode-618164-m02" | sudo tee /etc/hostname
	I0811 23:24:55.817992   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-618164-m02
	
	I0811 23:24:55.818026   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:55.820928   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.821428   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.821472   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.821656   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:55.821835   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.822019   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:55.822171   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:55.822361   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:55.822766   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:55.822784   32156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-618164-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-618164-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-618164-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
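	The script above is what libmachine runs over SSH to keep 127.0.1.1 pointing at the machine's hostname. A sketch of how such a script could be templated before being sent; hostnameScript is an illustrative helper, not libmachine's actual code:

package main

import "fmt"

// hostnameScript renders the /etc/hosts fixup shown in the log for a
// given hostname: replace an existing 127.0.1.1 entry if present,
// otherwise append one.
func hostnameScript(name string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("multinode-618164-m02"))
}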
	I0811 23:24:55.950900   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:24:55.950935   32156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17044-9593/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-9593/.minikube}
	I0811 23:24:55.950951   32156 buildroot.go:174] setting up certificates
	I0811 23:24:55.950961   32156 provision.go:83] configureAuth start
	I0811 23:24:55.950972   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetMachineName
	I0811 23:24:55.951339   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:24:55.954129   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.954518   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.954546   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.954705   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:55.957036   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.957395   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:55.957427   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:55.957524   32156 provision.go:138] copyHostCerts
	I0811 23:24:55.957563   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:24:55.957592   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem, removing ...
	I0811 23:24:55.957601   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:24:55.957661   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem (1078 bytes)
	I0811 23:24:55.957766   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:24:55.957787   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem, removing ...
	I0811 23:24:55.957791   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:24:55.957818   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem (1123 bytes)
	I0811 23:24:55.957860   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:24:55.957874   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem, removing ...
	I0811 23:24:55.957878   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:24:55.957905   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem (1675 bytes)
	I0811 23:24:55.957947   32156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem org=jenkins.multinode-618164-m02 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube multinode-618164-m02]
	I0811 23:24:56.042214   32156 provision.go:172] copyRemoteCerts
	I0811 23:24:56.042266   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:24:56.042285   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.045003   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.045436   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.045470   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.045662   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.045864   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.046035   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.046206   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:56.137954   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:24:56.138021   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0811 23:24:56.162271   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:24:56.162328   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0811 23:24:56.184830   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:24:56.184883   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:24:56.207461   32156 provision.go:86] duration metric: configureAuth took 256.487005ms
	I0811 23:24:56.207492   32156 buildroot.go:189] setting minikube options for container-runtime
	I0811 23:24:56.207719   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:24:56.207746   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:56.208076   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.210511   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.210868   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.210899   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.211053   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.211233   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.211394   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.211516   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.211671   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:56.212043   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:56.212055   32156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 23:24:56.332768   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0811 23:24:56.332790   32156 buildroot.go:70] root file system type: tmpfs
	I0811 23:24:56.332941   32156 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 23:24:56.332964   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.335961   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.336333   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.336364   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.336553   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.336715   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.336902   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.337011   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.337212   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:56.337577   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:56.337635   32156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 23:24:56.467992   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 23:24:56.468034   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:56.470915   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.471303   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:56.471324   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:56.471509   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:56.471683   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.471840   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:56.472023   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:56.472202   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:56.472579   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:56.472597   32156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 23:24:57.318506   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0811 23:24:57.318530   32156 machine.go:91] provisioned docker machine in 1.639122754s
	I0811 23:24:57.318540   32156 start.go:300] post-start starting for "multinode-618164-m02" (driver="kvm2")
	I0811 23:24:57.318549   32156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:24:57.318563   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.318866   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:24:57.318885   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:57.321491   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.321900   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.321931   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.322120   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.322294   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.322465   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.322620   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:57.415404   32156 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:24:57.419740   32156 command_runner.go:130] > NAME=Buildroot
	I0811 23:24:57.419756   32156 command_runner.go:130] > VERSION=2021.02.12-1-gb58903a-dirty
	I0811 23:24:57.419761   32156 command_runner.go:130] > ID=buildroot
	I0811 23:24:57.419766   32156 command_runner.go:130] > VERSION_ID=2021.02.12
	I0811 23:24:57.419771   32156 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0811 23:24:57.419811   32156 info.go:137] Remote host: Buildroot 2021.02.12
	I0811 23:24:57.419823   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/addons for local assets ...
	I0811 23:24:57.419878   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/files for local assets ...
	I0811 23:24:57.419944   32156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> 168362.pem in /etc/ssl/certs
	I0811 23:24:57.419954   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /etc/ssl/certs/168362.pem
	I0811 23:24:57.420027   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:24:57.430951   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:24:57.455810   32156 start.go:303] post-start completed in 137.254169ms
	I0811 23:24:57.455827   32156 fix.go:56] fixHost completed within 21.476760663s
	I0811 23:24:57.455846   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:57.458819   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.459285   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.459319   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.459481   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.459666   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.459880   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.460062   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.460266   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:57.460654   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0811 23:24:57.460674   32156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0811 23:24:57.580010   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691796297.530057331
	
	I0811 23:24:57.580030   32156 fix.go:206] guest clock: 1691796297.530057331
	I0811 23:24:57.580039   32156 fix.go:219] Guest: 2023-08-11 23:24:57.530057331 +0000 UTC Remote: 2023-08-11 23:24:57.455831086 +0000 UTC m=+84.766041720 (delta=74.226245ms)
	I0811 23:24:57.580058   32156 fix.go:190] guest clock delta is within tolerance: 74.226245ms
	I0811 23:24:57.580063   32156 start.go:83] releasing machines lock for "multinode-618164-m02", held for 21.601011459s
	I0811 23:24:57.580087   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.580383   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:24:57.582794   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.583139   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.583182   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.585603   32156 out.go:177] * Found network options:
	I0811 23:24:57.587391   32156 out.go:177]   - NO_PROXY=192.168.39.6
	W0811 23:24:57.589014   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:24:57.589065   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.589601   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.589779   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:24:57.589859   32156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:24:57.589895   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	W0811 23:24:57.589954   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:24:57.590035   32156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:24:57.590057   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:24:57.592408   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.592824   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.592857   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.592888   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.593056   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.593245   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.593291   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:24:57.593320   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:24:57.593399   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.593467   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:24:57.593544   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:57.593644   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:24:57.593790   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:24:57.593920   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:24:57.703458   32156 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:24:57.703747   32156 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0811 23:24:57.703788   32156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0811 23:24:57.703848   32156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:24:57.723142   32156 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0811 23:24:57.725191   32156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:24:57.725205   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:24:57.725317   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:24:57.743962   32156 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0811 23:24:57.744503   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0811 23:24:57.756067   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0811 23:24:57.765986   32156 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0811 23:24:57.766045   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0811 23:24:57.777864   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:24:57.789555   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0811 23:24:57.802111   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:24:57.813823   32156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:24:57.824785   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0811 23:24:57.835526   32156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:24:57.844800   32156 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:24:57.844854   32156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:24:57.854094   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:24:57.959516   32156 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0811 23:24:57.977625   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:24:57.977714   32156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0811 23:24:57.996190   32156 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0811 23:24:57.997418   32156 command_runner.go:130] > [Unit]
	I0811 23:24:57.997439   32156 command_runner.go:130] > Description=Docker Application Container Engine
	I0811 23:24:57.997449   32156 command_runner.go:130] > Documentation=https://docs.docker.com
	I0811 23:24:57.997458   32156 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0811 23:24:57.997466   32156 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0811 23:24:57.997475   32156 command_runner.go:130] > StartLimitBurst=3
	I0811 23:24:57.997482   32156 command_runner.go:130] > StartLimitIntervalSec=60
	I0811 23:24:57.997491   32156 command_runner.go:130] > [Service]
	I0811 23:24:57.997497   32156 command_runner.go:130] > Type=notify
	I0811 23:24:57.997504   32156 command_runner.go:130] > Restart=on-failure
	I0811 23:24:57.997508   32156 command_runner.go:130] > Environment=NO_PROXY=192.168.39.6
	I0811 23:24:57.997516   32156 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 23:24:57.997528   32156 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 23:24:57.997542   32156 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 23:24:57.997553   32156 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0811 23:24:57.997568   32156 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 23:24:57.997581   32156 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 23:24:57.997592   32156 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 23:24:57.997603   32156 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 23:24:57.997609   32156 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 23:24:57.997615   32156 command_runner.go:130] > ExecStart=
	I0811 23:24:57.997640   32156 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0811 23:24:57.997656   32156 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 23:24:57.997668   32156 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 23:24:57.997679   32156 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 23:24:57.997689   32156 command_runner.go:130] > LimitNOFILE=infinity
	I0811 23:24:57.997697   32156 command_runner.go:130] > LimitNPROC=infinity
	I0811 23:24:57.997704   32156 command_runner.go:130] > LimitCORE=infinity
	I0811 23:24:57.997710   32156 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0811 23:24:57.997721   32156 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0811 23:24:57.997728   32156 command_runner.go:130] > TasksMax=infinity
	I0811 23:24:57.997735   32156 command_runner.go:130] > TimeoutStartSec=0
	I0811 23:24:57.997750   32156 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 23:24:57.997759   32156 command_runner.go:130] > Delegate=yes
	I0811 23:24:57.997769   32156 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0811 23:24:57.997779   32156 command_runner.go:130] > KillMode=process
	I0811 23:24:57.997789   32156 command_runner.go:130] > [Install]
	I0811 23:24:57.997799   32156 command_runner.go:130] > WantedBy=multi-user.target
	I0811 23:24:57.997935   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:24:58.012870   32156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:24:58.036552   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:24:58.048720   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:24:58.061194   32156 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0811 23:24:58.091338   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:24:58.104438   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:24:58.122668   32156 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0811 23:24:58.122748   32156 ssh_runner.go:195] Run: which cri-dockerd
	I0811 23:24:58.126711   32156 command_runner.go:130] > /usr/bin/cri-dockerd
	I0811 23:24:58.126833   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0811 23:24:58.135972   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0811 23:24:58.151600   32156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0811 23:24:58.254570   32156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0811 23:24:58.362147   32156 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0811 23:24:58.362179   32156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0811 23:24:58.378397   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:24:58.481314   32156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0811 23:24:59.925856   32156 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.444492914s)
	I0811 23:24:59.925935   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:00.032330   32156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0811 23:25:00.140195   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:00.242439   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:00.345593   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0811 23:25:00.361867   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:00.471574   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0811 23:25:00.551023   32156 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0811 23:25:00.551086   32156 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0811 23:25:00.556986   32156 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0811 23:25:00.557008   32156 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0811 23:25:00.557017   32156 command_runner.go:130] > Device: 16h/22d	Inode: 853         Links: 1
	I0811 23:25:00.557028   32156 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0811 23:25:00.557043   32156 command_runner.go:130] > Access: 2023-08-11 23:25:00.435549902 +0000
	I0811 23:25:00.557050   32156 command_runner.go:130] > Modify: 2023-08-11 23:25:00.435549902 +0000
	I0811 23:25:00.557056   32156 command_runner.go:130] > Change: 2023-08-11 23:25:00.437549902 +0000
	I0811 23:25:00.557060   32156 command_runner.go:130] >  Birth: -
	I0811 23:25:00.557116   32156 start.go:534] Will wait 60s for crictl version
	I0811 23:25:00.557156   32156 ssh_runner.go:195] Run: which crictl
	I0811 23:25:00.560727   32156 command_runner.go:130] > /usr/bin/crictl
	I0811 23:25:00.560790   32156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:25:00.604460   32156 command_runner.go:130] > Version:  0.1.0
	I0811 23:25:00.604493   32156 command_runner.go:130] > RuntimeName:  docker
	I0811 23:25:00.604498   32156 command_runner.go:130] > RuntimeVersion:  24.0.4
	I0811 23:25:00.604504   32156 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0811 23:25:00.605908   32156 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0811 23:25:00.605970   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:25:00.634171   32156 command_runner.go:130] > 24.0.4
	I0811 23:25:00.635418   32156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0811 23:25:00.662568   32156 command_runner.go:130] > 24.0.4
	I0811 23:25:00.665312   32156 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0811 23:25:00.667164   32156 out.go:177]   - env NO_PROXY=192.168.39.6
	I0811 23:25:00.669019   32156 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:25:00.671807   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:25:00.672171   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:25:00.672206   32156 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:25:00.672386   32156 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0811 23:25:00.676532   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:25:00.689316   32156 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164 for IP: 192.168.39.254
	I0811 23:25:00.689349   32156 certs.go:190] acquiring lock for shared ca certs: {Name:mke12ed30faa4458f68c7f1069767b7834c8a1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:25:00.689497   32156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key
	I0811 23:25:00.689540   32156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key
	I0811 23:25:00.689554   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:25:00.689568   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:25:00.689580   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:25:00.689590   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 23:25:00.689644   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem (1338 bytes)
	W0811 23:25:00.689670   32156 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836_empty.pem, impossibly tiny 0 bytes
	I0811 23:25:00.689681   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem (1679 bytes)
	I0811 23:25:00.689703   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem (1078 bytes)
	I0811 23:25:00.689725   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:25:00.689747   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem (1675 bytes)
	I0811 23:25:00.689789   32156 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:25:00.689811   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.689823   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem -> /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.689836   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.690135   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:25:00.715861   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 23:25:00.738775   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:25:00.761747   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0811 23:25:00.784796   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:25:00.807516   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/16836.pem --> /usr/share/ca-certificates/16836.pem (1338 bytes)
	I0811 23:25:00.830089   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /usr/share/ca-certificates/168362.pem (1708 bytes)
	I0811 23:25:00.853967   32156 ssh_runner.go:195] Run: openssl version
	I0811 23:25:00.859517   32156 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0811 23:25:00.859584   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168362.pem && ln -fs /usr/share/ca-certificates/168362.pem /etc/ssl/certs/168362.pem"
	I0811 23:25:00.869741   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.874542   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.874614   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:07 /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.874666   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168362.pem
	I0811 23:25:00.880067   32156 command_runner.go:130] > 3ec20f2e
	I0811 23:25:00.880258   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168362.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 23:25:00.890270   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:25:00.900196   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.904853   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.904882   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.904918   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:00.910034   32156 command_runner.go:130] > b5213941
	I0811 23:25:00.910094   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:25:00.919484   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16836.pem && ln -fs /usr/share/ca-certificates/16836.pem /etc/ssl/certs/16836.pem"
	I0811 23:25:00.930148   32156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.934711   32156 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.934842   32156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:07 /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.934888   32156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16836.pem
	I0811 23:25:00.940182   32156 command_runner.go:130] > 51391683
	I0811 23:25:00.940487   32156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16836.pem /etc/ssl/certs/51391683.0"
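The openssl x509 -hash calls above implement the standard OpenSSL CA-store layout: at verify time OpenSSL looks up a trust anchor through a symlink named <subject-hash>.0. Reproducing one of the links by hand (paths are the ones from this run, purely illustrative):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/168362.pem)
	sudo ln -fs /etc/ssl/certs/168362.pem "/etc/ssl/certs/${HASH}.0"    # HASH is 3ec20f2e in this run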
	I0811 23:25:00.950772   32156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:25:00.954768   32156 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:25:00.954803   32156 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:25:00.954880   32156 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0811 23:25:00.981976   32156 command_runner.go:130] > cgroupfs
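Docker's cgroup driver is probed here because the kubelet's cgroupDriver (set to cgroupfs in the generated config below) must match the runtime's, or the kubelet will fail to start pods. The same check by hand:

	docker info --format '{{.CgroupDriver}}'    # prints cgroupfs or systemd; must agree with KubeletConfiguration.cgroupDriver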
	I0811 23:25:00.982131   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:25:00.982144   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:25:00.982158   32156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:25:00.982192   32156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-618164 NodeName:multinode-618164-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:25:00.982394   32156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-618164-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
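The rendered config above is what kubeadm consumes on the joining node; the authoritative cluster-side copy lives in a ConfigMap, which the preflight output further down also points at. To inspect it:

	kubectl -n kube-system get cm kubeadm-config -o yaml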
	I0811 23:25:00.982484   32156 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-618164-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 23:25:00.982596   32156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:25:00.992566   32156 command_runner.go:130] > kubeadm
	I0811 23:25:00.992586   32156 command_runner.go:130] > kubectl
	I0811 23:25:00.992592   32156 command_runner.go:130] > kubelet
	I0811 23:25:00.992611   32156 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:25:00.992666   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0811 23:25:01.004031   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0811 23:25:01.021956   32156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
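Once the kubelet.service unit and the 10-kubeadm.conf drop-in are copied into place, systemd has to re-read unit files before the enable/start that the log performs after the join. The manual equivalent of that later step:

	sudo systemctl daemon-reload
	sudo systemctl enable --now kubelet    # same effect as the enable + start pair run below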
	I0811 23:25:01.039960   32156 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0811 23:25:01.044057   32156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:25:01.056054   32156 host.go:66] Checking if "multinode-618164" exists ...
	I0811 23:25:01.056422   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:25:01.056496   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:25:01.056530   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:25:01.071673   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0811 23:25:01.072083   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:25:01.072625   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:25:01.072644   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:25:01.072958   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:25:01.073142   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:25:01.073338   32156 start.go:301] JoinCluster: &{Name:multinode-618164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.4 ClusterName:multinode-618164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:25:01.073461   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0811 23:25:01.073476   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:25:01.076278   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.076678   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:25:01.076709   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.076873   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:25:01.077028   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:25:01.077195   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:25:01.077360   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:25:01.243353   32156 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sdlilv.uc4mjftwwn2c18uw --discovery-token-ca-cert-hash sha256:bf28045c66954787868571c8676d98e04ae92922baabe0a4e5f5bbb1aa371548 
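The join command above was minted on the control-plane node by the kubeadm invocation one step earlier; --ttl=0 makes the bootstrap token non-expiring, which is acceptable for a throwaway test cluster but not advisable elsewhere:

	sudo kubeadm token create --print-join-command --ttl=0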
	I0811 23:25:01.244944   32156 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0811 23:25:01.244979   32156 host.go:66] Checking if "multinode-618164" exists ...
	I0811 23:25:01.245257   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:25:01.245280   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:25:01.259938   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0811 23:25:01.260403   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:25:01.260864   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:25:01.260883   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:25:01.261300   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:25:01.261473   32156 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:25:01.261680   32156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-618164-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0811 23:25:01.261703   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:25:01.264761   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.265249   32156 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:23:45 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:25:01.265275   32156 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:25:01.265472   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:25:01.265680   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:25:01.265814   32156 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:25:01.265963   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:25:01.461009   32156 command_runner.go:130] > node/multinode-618164-m02 cordoned
	I0811 23:25:04.501061   32156 command_runner.go:130] > pod "busybox-67b7f59bb-vrdpw" has DeletionTimestamp older than 1 seconds, skipping
	I0811 23:25:04.501221   32156 command_runner.go:130] > node/multinode-618164-m02 drained
	I0811 23:25:04.503146   32156 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0811 23:25:04.503164   32156 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-m2c5t, kube-system/kube-proxy-9ldtq
	I0811 23:25:04.503189   32156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-618164-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.241483996s)
	I0811 23:25:04.503211   32156 node.go:108] successfully drained node "m02"
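The drain that just completed maps directly onto plain kubectl. Note the log's own warning that --delete-local-data is deprecated, so a hand-run form would drop it in favor of --delete-emptydir-data alone:

	kubectl drain multinode-618164-m02 --force --grace-period=1 \
	  --skip-wait-for-delete-timeout=1 --disable-eviction \
	  --ignore-daemonsets --delete-emptydir-data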
	I0811 23:25:04.503539   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:25:04.503745   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:04.504023   32156 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0811 23:25:04.504062   32156 round_trippers.go:463] DELETE https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:04.504074   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:04.504081   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:04.504087   32156 round_trippers.go:473]     Content-Type: application/json
	I0811 23:25:04.504093   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:04.509720   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:25:04.509746   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:04.509756   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:04.509762   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:04.509768   32156 round_trippers.go:580]     Content-Length: 171
	I0811 23:25:04.509773   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:04 GMT
	I0811 23:25:04.509779   32156 round_trippers.go:580]     Audit-Id: 9e43768e-f498-44cb-89dc-762be69ad47a
	I0811 23:25:04.509784   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:04.509792   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:04.509823   32156 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-618164-m02","kind":"nodes","uid":"5117de97-d432-4fe0-baad-4ef71b0a5470"}}
	I0811 23:25:04.509905   32156 node.go:124] successfully deleted node "m02"
	I0811 23:25:04.509931   32156 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
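The raw DELETE against /api/v1/nodes/multinode-618164-m02 above is exactly what the CLI issues under the hood; the one-line equivalent:

	kubectl delete node multinode-618164-m02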
	I0811 23:25:04.509958   32156 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0811 23:25:04.509987   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sdlilv.uc4mjftwwn2c18uw --discovery-token-ca-cert-hash sha256:bf28045c66954787868571c8676d98e04ae92922baabe0a4e5f5bbb1aa371548 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-618164-m02"
	I0811 23:25:04.624388   32156 command_runner.go:130] ! W0811 23:25:04.574034    1146 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0811 23:25:04.867134   32156 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 23:25:06.582453   32156 command_runner.go:130] > [preflight] Running pre-flight checks
	I0811 23:25:06.582482   32156 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0811 23:25:06.582495   32156 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0811 23:25:06.582511   32156 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:25:06.582522   32156 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:25:06.582531   32156 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0811 23:25:06.582542   32156 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0811 23:25:06.582559   32156 command_runner.go:130] > This node has joined the cluster:
	I0811 23:25:06.582575   32156 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0811 23:25:06.582587   32156 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0811 23:25:06.582601   32156 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0811 23:25:06.582624   32156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sdlilv.uc4mjftwwn2c18uw --discovery-token-ca-cert-hash sha256:bf28045c66954787868571c8676d98e04ae92922baabe0a4e5f5bbb1aa371548 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-618164-m02": (2.072621514s)
	I0811 23:25:06.582655   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0811 23:25:06.765410   32156 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0811 23:25:06.918249   32156 start.go:303] JoinCluster complete in 5.844904448s
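As the kubeadm output above suggests, the rejoin can be confirmed from the control plane:

	kubectl get nodes -o wide    # multinode-618164-m02 should reappear, initially NotReady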
	I0811 23:25:06.918276   32156 cni.go:84] Creating CNI manager for ""
	I0811 23:25:06.918282   32156 cni.go:136] 3 nodes found, recommending kindnet
	I0811 23:25:06.918333   32156 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:25:06.924190   32156 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0811 23:25:06.924215   32156 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0811 23:25:06.924224   32156 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0811 23:25:06.924234   32156 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:25:06.924247   32156 command_runner.go:130] > Access: 2023-08-11 23:23:45.638456579 +0000
	I0811 23:25:06.924258   32156 command_runner.go:130] > Modify: 2023-08-01 03:01:17.000000000 +0000
	I0811 23:25:06.924267   32156 command_runner.go:130] > Change: 2023-08-11 23:23:43.758456579 +0000
	I0811 23:25:06.924274   32156 command_runner.go:130] >  Birth: -
	I0811 23:25:06.924647   32156 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:25:06.924671   32156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:25:06.946592   32156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:25:07.323185   32156 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:25:07.327939   32156 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:25:07.332437   32156 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0811 23:25:07.344644   32156 command_runner.go:130] > daemonset.apps/kindnet configured
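With the kindnet manifest re-applied ("configured" rather than "created", since the objects already existed), the rollout can be watched directly; a manual check along these lines:

	kubectl -n kube-system rollout status daemonset/kindnet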
	I0811 23:25:07.347483   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:25:07.347741   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:07.348007   32156 round_trippers.go:463] GET https://192.168.39.6:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:25:07.348020   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.348032   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.348040   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.350938   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:07.350953   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.350960   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.350968   32156 round_trippers.go:580]     Content-Length: 291
	I0811 23:25:07.350973   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.350981   32156 round_trippers.go:580]     Audit-Id: fbeaad18-59e9-4540-831e-38b3610091fd
	I0811 23:25:07.350990   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.351004   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.351014   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.351174   32156 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31aef6c0-c84e-4384-9e6e-68f0c22e59ba","resourceVersion":"888","creationTimestamp":"2023-08-11T23:20:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 23:25:07.351269   32156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-618164" context rescaled to 1 replicas
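The Scale subresource call above has a direct CLI form:

	kubectl -n kube-system scale deployment coredns --replicas=1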
	I0811 23:25:07.351302   32156 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0811 23:25:07.353585   32156 out.go:177] * Verifying Kubernetes components...
	I0811 23:25:07.355080   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:07.385575   32156 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:25:07.385774   32156 kapi.go:59] client config for multinode-618164: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/client.key", CAFile:"/home/jenkins/minikube-integration/17044-9593/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:07.385976   32156 node_ready.go:35] waiting up to 6m0s for node "multinode-618164-m02" to be "Ready" ...
	I0811 23:25:07.386027   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:07.386033   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.386041   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.386049   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.389030   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:07.389056   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.389068   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.389077   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.389087   32156 round_trippers.go:580]     Content-Length: 4030
	I0811 23:25:07.389099   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.389107   32156 round_trippers.go:580]     Audit-Id: 471c9814-2221-4b46-9879-4076ecbff85f
	I0811 23:25:07.389119   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.389131   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.389227   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"948","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I0811 23:25:07.389580   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:07.389597   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.389608   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.389623   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.392669   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:07.392690   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.392700   32156 round_trippers.go:580]     Audit-Id: 175875ca-82b9-4448-a10c-d03144ec513f
	I0811 23:25:07.392709   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.392718   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.392730   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.392742   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.392753   32156 round_trippers.go:580]     Content-Length: 4030
	I0811 23:25:07.392765   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.392810   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"948","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I0811 23:25:07.893637   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:07.893665   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:07.893677   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.893687   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:07.900751   32156 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0811 23:25:07.900780   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:07.900793   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:07.900803   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:07.900812   32156 round_trippers.go:580]     Content-Length: 4030
	I0811 23:25:07.900821   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.900836   32156 round_trippers.go:580]     Audit-Id: 54d102be-0404-4b70-a674-e755b192b2c4
	I0811 23:25:07.900845   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.900856   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.900943   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"948","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I0811 23:25:08.393328   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:08.393351   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:08.393359   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:08.393371   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:08.396233   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:08.396258   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:08.396269   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:08 GMT
	I0811 23:25:08.396278   32156 round_trippers.go:580]     Audit-Id: 3c7fca75-a854-46c3-ac44-79264082a673
	I0811 23:25:08.396286   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:08.396294   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:08.396302   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:08.396315   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:08.396885   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:08.893528   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:08.893559   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:08.893567   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:08.893574   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:08.896611   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:08.896637   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:08.896648   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:08.896657   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:08 GMT
	I0811 23:25:08.896666   32156 round_trippers.go:580]     Audit-Id: 6368ceab-0ac5-46ff-ab8f-49e28ded3f7e
	I0811 23:25:08.896674   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:08.896686   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:08.896693   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:08.897080   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:09.393741   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:09.393768   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:09.393776   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:09.393787   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:09.396774   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:09.396800   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:09.396810   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:09.396818   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:09.396826   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:09.396834   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:09.396842   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:09 GMT
	I0811 23:25:09.396849   32156 round_trippers.go:580]     Audit-Id: 798dcfa2-7eb3-45a1-bc0d-b59597c0b9db
	I0811 23:25:09.397106   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:09.397422   32156 node_ready.go:58] node "multinode-618164-m02" has status "Ready":"False"
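The polling loop around this point (a GET roughly every 500ms until the Ready condition flips) is the library-level version of kubectl's built-in wait; a hand-run equivalent, assuming the same 6-minute budget the log allots:

	kubectl wait --for=condition=Ready node/multinode-618164-m02 --timeout=6m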
	I0811 23:25:09.893830   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:09.893853   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:09.893861   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:09.893868   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:09.896883   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:09.896901   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:09.896911   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:09.896921   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:09.896929   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:09.896938   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:09 GMT
	I0811 23:25:09.896948   32156 round_trippers.go:580]     Audit-Id: 3fdae358-13e1-4e53-834a-dcfd607e9e61
	I0811 23:25:09.896956   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:09.897089   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:10.393805   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:10.393827   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:10.393835   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:10.393841   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:10.397254   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:10.397275   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:10.397282   32156 round_trippers.go:580]     Audit-Id: 8aa3aaf2-0bac-4908-b0d7-58d75470f4a8
	I0811 23:25:10.397288   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:10.397293   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:10.397335   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:10.397370   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:10.397380   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:10 GMT
	I0811 23:25:10.397480   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:10.894138   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:10.894166   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:10.894179   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:10.894189   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:10.896893   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:10.896913   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:10.896919   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:10.896925   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:10.896930   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:10.896936   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:10.896941   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:10 GMT
	I0811 23:25:10.896947   32156 round_trippers.go:580]     Audit-Id: a0318b3e-e9c5-4a9c-8e51-8bda17281db1
	I0811 23:25:10.897076   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:11.393608   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:11.393629   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:11.393637   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:11.393649   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:11.396575   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:11.396601   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:11.396612   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:11.396622   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:11.396637   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:11.396650   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:11 GMT
	I0811 23:25:11.396662   32156 round_trippers.go:580]     Audit-Id: ace06413-538f-4e04-b1b3-2bddf01ae167
	I0811 23:25:11.396679   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:11.396841   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:11.893455   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:11.893477   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:11.893486   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:11.893492   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:11.896342   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:11.896363   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:11.896372   32156 round_trippers.go:580]     Audit-Id: b6d50b57-164f-4241-bdb3-7ba59d31e439
	I0811 23:25:11.896381   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:11.896391   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:11.896400   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:11.896410   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:11.896415   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:11 GMT
	I0811 23:25:11.896962   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:11.897198   32156 node_ready.go:58] node "multinode-618164-m02" has status "Ready":"False"
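
The repeated GET /api/v1/nodes/multinode-618164-m02 cycles above are minikube's node-readiness poll: node_ready.go re-fetches the Node object roughly every 500ms and logs "Ready":"False" until the kubelet posts a NodeReady condition. A minimal client-go sketch of that loop, assuming standard client-go types; the function name waitNodeReady is illustrative, not minikube's actual helper:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server until the named node reports a
// NodeReady condition with status True, or the timeout elapses.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil // corresponds to the "Ready":"True" line later in this log
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

In the log, the transition is visible as the node's resourceVersion moving from 965 to 978 once the kubelet updates its status.
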
	I0811 23:25:12.393623   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:12.393645   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:12.393653   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:12.393659   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:12.396444   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:12.396467   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:12.396475   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:12.396481   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:12 GMT
	I0811 23:25:12.396490   32156 round_trippers.go:580]     Audit-Id: 3ee8a92d-c349-413f-8555-0c8345e4cf6a
	I0811 23:25:12.396499   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:12.396513   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:12.396521   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:12.396804   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:12.893775   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:12.893803   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:12.893817   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:12.893826   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:12.896956   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:12.896974   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:12.896981   32156 round_trippers.go:580]     Audit-Id: 710d49be-f667-4e15-845b-f361c8c33534
	I0811 23:25:12.896986   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:12.896992   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:12.896997   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:12.897002   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:12.897008   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:12 GMT
	I0811 23:25:12.897278   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:13.393997   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:13.394018   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:13.394026   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:13.394035   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:13.396987   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:13.397014   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:13.397023   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:13.397031   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:13 GMT
	I0811 23:25:13.397039   32156 round_trippers.go:580]     Audit-Id: c61542df-08dc-4262-bc1b-77d08e94ea5d
	I0811 23:25:13.397046   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:13.397053   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:13.397062   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:13.397424   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:13.894143   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:13.894170   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:13.894180   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:13.894189   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:13.897035   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:13.897059   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:13.897067   32156 round_trippers.go:580]     Audit-Id: c9cbd9cd-0056-470f-8a2d-d1dd19a1ae34
	I0811 23:25:13.897072   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:13.897078   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:13.897083   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:13.897088   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:13.897100   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:13 GMT
	I0811 23:25:13.897502   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:13.897746   32156 node_ready.go:58] node "multinode-618164-m02" has status "Ready":"False"
	I0811 23:25:14.394274   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:14.394294   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:14.394308   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:14.394317   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:14.397448   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:14.397469   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:14.397476   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:14.397482   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:14.397489   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:14 GMT
	I0811 23:25:14.397498   32156 round_trippers.go:580]     Audit-Id: f99bf46e-985a-4acf-ad1f-08c3f416c36b
	I0811 23:25:14.397507   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:14.397519   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:14.397603   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:14.894154   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:14.894177   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:14.894185   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:14.894192   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:14.897012   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:14.897037   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:14.897045   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:14.897051   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:14.897057   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:14.897062   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:14 GMT
	I0811 23:25:14.897068   32156 round_trippers.go:580]     Audit-Id: 6cc2af9e-9735-4ac7-b1b2-5ad0ab264d78
	I0811 23:25:14.897076   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:14.897325   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:15.393575   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:15.393597   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.393605   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.393611   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.396566   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.396590   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.396604   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.396614   32156 round_trippers.go:580]     Audit-Id: 4ff194c2-2cb4-4264-bca0-93526a661c22
	I0811 23:25:15.396620   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.396626   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.396631   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.396636   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.396981   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"965","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I0811 23:25:15.893712   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:15.893743   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.893755   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.893765   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.896571   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.896597   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.896608   32156 round_trippers.go:580]     Audit-Id: 41731806-499a-49ba-9460-e35fd2480c15
	I0811 23:25:15.896617   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.896627   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.896634   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.896643   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.896651   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.896825   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"978","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3373 chars]
	I0811 23:25:15.897161   32156 node_ready.go:49] node "multinode-618164-m02" has status "Ready":"True"
	I0811 23:25:15.897183   32156 node_ready.go:38] duration metric: took 8.511193902s waiting for node "multinode-618164-m02" to be "Ready" ...
	I0811 23:25:15.897195   32156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:15.897275   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:15.897293   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.897303   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.897315   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.901284   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.901302   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.901311   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.901320   32156 round_trippers.go:580]     Audit-Id: 7a3587ac-9b0b-4554-b6d0-11eee67a8dad
	I0811 23:25:15.901329   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.901343   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.901356   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.901373   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.903176   32156 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"978"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83751 chars]
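
Once the node is Ready, pod_ready.go fetches the whole kube-system PodList once (the request above) and then re-checks each system-critical pod individually, as the following lines show. A sketch of that per-pod check, again assuming plain client-go; podIsReady is an illustrative name, not minikube's:

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's PodReady condition is True,
// mirroring the pod_ready.go checks that follow in this log.
func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
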
	I0811 23:25:15.905648   32156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.905706   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zrmf9
	I0811 23:25:15.905714   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.905726   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.905734   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.908956   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.908972   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.908981   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.908990   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.909001   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.909015   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.909024   32156 round_trippers.go:580]     Audit-Id: d6a7308f-652e-4eaa-b3e5-6386e018f45d
	I0811 23:25:15.909037   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.909771   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zrmf9","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c3c83ae1-ae12-4872-9c78-4aff9f1cefe4","resourceVersion":"884","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"f8fa5d1a-2f05-462f-a491-7eee14eeba89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8fa5d1a-2f05-462f-a491-7eee14eeba89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
	I0811 23:25:15.910175   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.910188   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.910198   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.910207   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.912761   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.912780   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.912791   32156 round_trippers.go:580]     Audit-Id: 4f1b80c5-dd9e-4d0d-8481-f6d57d3ac4f5
	I0811 23:25:15.912799   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.912806   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.912814   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.912823   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.912831   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.912947   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.913206   32156 pod_ready.go:92] pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.913221   32156 pod_ready.go:81] duration metric: took 7.555485ms waiting for pod "coredns-5d78c9869d-zrmf9" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.913228   32156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.913270   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-618164
	I0811 23:25:15.913278   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.913284   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.913290   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.918898   32156 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:25:15.918913   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.918920   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.918926   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.918931   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.918944   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.918954   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.918962   32156 round_trippers.go:580]     Audit-Id: 7b7b060e-aed5-4a41-9fab-fe7573a0071d
	I0811 23:25:15.919556   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-618164","namespace":"kube-system","uid":"543135b3-5e52-43aa-af7c-1fea5cfb95b6","resourceVersion":"868","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.6:2379","kubernetes.io/config.hash":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.mirror":"c48f92ef7b50cf59a6cd1a2473a2a4ee","kubernetes.io/config.seen":"2023-08-11T23:20:15.427439067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I0811 23:25:15.919914   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.919920   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.919927   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.919933   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.922982   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.922996   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.923003   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.923008   32156 round_trippers.go:580]     Audit-Id: c812d25f-86f6-46ee-8102-24276fa1d562
	I0811 23:25:15.923016   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.923025   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.923040   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.923050   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.923414   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.923675   32156 pod_ready.go:92] pod "etcd-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.923687   32156 pod_ready.go:81] duration metric: took 10.454198ms waiting for pod "etcd-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.923702   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.923739   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-618164
	I0811 23:25:15.923746   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.923753   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.923764   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.925913   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.925927   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.925934   32156 round_trippers.go:580]     Audit-Id: 1d363306-d32e-4b72-8ad0-bc0fe96b8f6b
	I0811 23:25:15.925939   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.925945   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.925953   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.925962   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.925971   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.926233   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-618164","namespace":"kube-system","uid":"a1145d9b-2c2a-42b1-bbe6-142472dc9d01","resourceVersion":"870","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.6:8443","kubernetes.io/config.hash":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.mirror":"f0707583abef3bd312ad889b26693949","kubernetes.io/config.seen":"2023-08-11T23:20:15.427440318Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7597 chars]
	I0811 23:25:15.926575   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.926584   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.926591   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.926596   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.928579   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:25:15.928597   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.928606   32156 round_trippers.go:580]     Audit-Id: be77d699-874e-43b4-8864-aacf648c5177
	I0811 23:25:15.928617   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.928625   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.928637   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.928650   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.928661   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.928828   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.929170   32156 pod_ready.go:92] pod "kube-apiserver-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.929187   32156 pod_ready.go:81] duration metric: took 5.480071ms waiting for pod "kube-apiserver-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.929195   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.929232   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-618164
	I0811 23:25:15.929240   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.929247   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.929253   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.931219   32156 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:25:15.931233   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.931240   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.931249   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.931255   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.931261   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.931266   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.931274   32156 round_trippers.go:580]     Audit-Id: 75b0e306-c810-44fb-8093-c26601b86a5d
	I0811 23:25:15.931407   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-618164","namespace":"kube-system","uid":"41f34044-7115-493f-94d8-53f69fd37242","resourceVersion":"848","creationTimestamp":"2023-08-11T23:20:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.mirror":"907d55e95bad6f7d40e8e4ad73117c90","kubernetes.io/config.seen":"2023-08-11T23:20:06.002920339Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7170 chars]
	I0811 23:25:15.932122   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:15.932143   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:15.932153   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.932163   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:15.935309   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:15.935330   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:15.935339   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:15.935348   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:15.935357   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.935366   32156 round_trippers.go:580]     Audit-Id: 3c5019fc-a508-43fc-97d5-67ed618ae270
	I0811 23:25:15.935380   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.935391   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.935469   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:15.935726   32156 pod_ready.go:92] pod "kube-controller-manager-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:15.935738   32156 pod_ready.go:81] duration metric: took 6.537435ms waiting for pod "kube-controller-manager-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:15.935746   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.093703   32156 request.go:628] Waited for 157.871057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:25:16.093764   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ldtq
	I0811 23:25:16.093769   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.093776   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.093783   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.096959   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.096985   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.096993   32156 round_trippers.go:580]     Audit-Id: d88fbbe6-3d4b-4920-ab36-983844986cd9
	I0811 23:25:16.096999   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.097004   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.097011   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.097017   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.097023   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.097293   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9ldtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff783df6-3af7-44cf-bc60-843db8420efa","resourceVersion":"954","creationTimestamp":"2023-08-11T23:21:15Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:21:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
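
The "Waited for ... due to client-side throttling, not priority and fairness" lines here and below are not API-server pushback: they come from client-go's local token-bucket rate limiter, whose defaults (QPS 5, Burst 10) explain the ~200ms spacing between these pod and node GETs. A sketch of where those knobs live, with the default values spelled out for illustration:

package clientqps

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose client-side rate limiter
// allows 5 requests/second with bursts of 10 -- the client-go defaults
// that produce the "Waited for ..." messages in this log.
func newThrottledClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second
	cfg.Burst = 10 // extra headroom before waits are inserted
	return kubernetes.NewForConfig(cfg)
}

At 5 QPS the limiter spaces sustained requests 200ms apart, which matches the 157–196ms waits recorded above and below.
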
	I0811 23:25:16.294014   32156 request.go:628] Waited for 196.32982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:16.294066   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m02
	I0811 23:25:16.294071   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.294090   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.294096   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.297532   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.297551   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.297558   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.297564   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.297569   32156 round_trippers.go:580]     Audit-Id: e8c12ee1-02d9-4995-83cb-640bc1424a46
	I0811 23:25:16.297574   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.297582   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.297591   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.297744   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m02","uid":"10708ab3-0eee-4255-ab45-5af2662d444a","resourceVersion":"978","creationTimestamp":"2023-08-11T23:25:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3373 chars]
	I0811 23:25:16.297982   32156 pod_ready.go:92] pod "kube-proxy-9ldtq" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:16.297994   32156 pod_ready.go:81] duration metric: took 362.24345ms waiting for pod "kube-proxy-9ldtq" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.298004   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.494415   32156 request.go:628] Waited for 196.355018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:25:16.494491   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-glw45
	I0811 23:25:16.494501   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.494512   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.494531   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.497665   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.497684   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.497694   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.497704   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.497723   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.497733   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.497745   32156 round_trippers.go:580]     Audit-Id: ade132a3-b98c-4d7e-9232-60bf828aada0
	I0811 23:25:16.497751   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.497897   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-glw45","generateName":"kube-proxy-","namespace":"kube-system","uid":"4616f16f-9566-447c-90cd-8e37c18508e3","resourceVersion":"843","creationTimestamp":"2023-08-11T23:20:27Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I0811 23:25:16.693749   32156 request.go:628] Waited for 195.321196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:16.693801   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:16.693808   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.693820   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.693830   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.696777   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:16.696813   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.696824   32156 round_trippers.go:580]     Audit-Id: 182cef79-e93b-48c1-8920-ab0da0b7ca2b
	I0811 23:25:16.696830   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.696836   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.696841   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.696847   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.696853   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.697091   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:16.697449   32156 pod_ready.go:92] pod "kube-proxy-glw45" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:16.697464   32156 pod_ready.go:81] duration metric: took 399.4554ms waiting for pod "kube-proxy-glw45" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.697474   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:16.893847   32156 request.go:628] Waited for 196.313905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:25:16.893915   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pv5p5
	I0811 23:25:16.893920   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:16.893928   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.893937   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:16.897219   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:16.897239   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:16.897245   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.897251   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.897257   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:16.897262   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:16.897268   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.897273   32156 round_trippers.go:580]     Audit-Id: 3fdd3b69-b7f8-48a8-8bea-5b227d3cc66e
	I0811 23:25:16.897458   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pv5p5","generateName":"kube-proxy-","namespace":"kube-system","uid":"08e6223f-0c5c-47bd-b37d-67f279f4d4be","resourceVersion":"961","creationTimestamp":"2023-08-11T23:22:07Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7b0c420a-7d21-48f8-a07e-6a10140963bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b0c420a-7d21-48f8-a07e-6a10140963bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5971 chars]
	I0811 23:25:17.093891   32156 request.go:628] Waited for 196.00394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:25:17.093947   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164-m03
	I0811 23:25:17.093965   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.093977   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.093987   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.096456   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:17.096474   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.096480   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.096486   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.096491   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.096497   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.096502   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.096508   32156 round_trippers.go:580]     Audit-Id: 8a9732b4-c5f9-40d3-b23c-0edf85a0fe77
	I0811 23:25:17.096733   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164-m03","uid":"84060722-cb59-478c-9b01-7517a6ae9f59","resourceVersion":"958","creationTimestamp":"2023-08-11T23:22:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3742 chars]
	I0811 23:25:17.097036   32156 pod_ready.go:97] node "multinode-618164-m03" hosting pod "kube-proxy-pv5p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164-m03" has status "Ready":"Unknown"
	I0811 23:25:17.097054   32156 pod_ready.go:81] duration metric: took 399.575386ms waiting for pod "kube-proxy-pv5p5" in "kube-system" namespace to be "Ready" ...
	E0811 23:25:17.097062   32156 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-618164-m03" hosting pod "kube-proxy-pv5p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-618164-m03" has status "Ready":"Unknown"
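The skip above is gated on the Ready condition of the node hosting the pod: m03 reports "Ready":"Unknown", so waiting on its kube-proxy pod is pointless. A rough sketch of that gate with client-go, assuming an existing clientset; nodeIsReady is a hypothetical helper, not minikube's pod_ready.go code:

	package ready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady reports whether the node's NodeReady condition is True.
	func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// Status is "True", "False", or "Unknown" (as for m03 above).
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}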
	I0811 23:25:17.097070   32156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:17.294578   32156 request.go:628] Waited for 197.422569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:25:17.294643   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-618164
	I0811 23:25:17.294650   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.294662   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.294673   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.297453   32156 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:17.297471   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.297478   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.297483   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.297489   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.297494   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.297499   32156 round_trippers.go:580]     Audit-Id: bafeeaaf-2d19-41ba-b27d-364971a80a8f
	I0811 23:25:17.297505   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.297698   32156 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-618164","namespace":"kube-system","uid":"b2a96d9a-e022-4abd-b8c6-e6ec3102773f","resourceVersion":"871","creationTimestamp":"2023-08-11T23:20:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.mirror":"d3d76d9662321b20a9c933331303ec3d","kubernetes.io/config.seen":"2023-08-11T23:20:15.427437689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I0811 23:25:17.494500   32156 request.go:628] Waited for 196.35493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:17.494562   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/multinode-618164
	I0811 23:25:17.494570   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.494582   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.494591   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.497629   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:17.497648   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.497655   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.497661   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.497670   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.497679   32156 round_trippers.go:580]     Audit-Id: b379217e-d58d-4a8e-83af-37d0faef58c0
	I0811 23:25:17.497688   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.497708   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.497865   32156 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:20:11Z","fieldsType":"FieldsV1","fi [truncated 5155 chars]
	I0811 23:25:17.498206   32156 pod_ready.go:92] pod "kube-scheduler-multinode-618164" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:17.498221   32156 pod_ready.go:81] duration metric: took 401.140427ms waiting for pod "kube-scheduler-multinode-618164" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:17.498231   32156 pod_ready.go:38] duration metric: took 1.601020252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
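pod_ready works through each of the labels listed above in turn, polling until the matching pods report Ready. A sketch of one such wait using client-go's polling helper, under the assumption that PodReady=True is the success criterion; waitPodsReady and its poll interval are illustrative, not minikube's exact code:

	package ready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodsReady polls until every pod matching selector has Ready=True,
	// e.g. selector = "k8s-app=kube-proxy" for the waits logged above.
	func waitPodsReady(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				ready := false
				for _, cond := range p.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil // keep polling
				}
			}
			return true, nil
		})
	}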
	I0811 23:25:17.498248   32156 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:25:17.498294   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:17.511290   32156 system_svc.go:56] duration metric: took 13.036483ms WaitForService to wait for kubelet.
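The kubelet check above relies entirely on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active. A local-equivalent sketch, run directly rather than through minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet <unit>` exits 0 iff the unit is
		// active; any other state yields a non-zero exit code.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}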
	I0811 23:25:17.511311   32156 kubeadm.go:581] duration metric: took 10.15994815s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:25:17.511333   32156 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:25:17.693680   32156 request.go:628] Waited for 182.289745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0811 23:25:17.693729   32156 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0811 23:25:17.693735   32156 round_trippers.go:469] Request Headers:
	I0811 23:25:17.693744   32156 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.693751   32156 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0811 23:25:17.697024   32156 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:17.697044   32156 round_trippers.go:577] Response Headers:
	I0811 23:25:17.697053   32156 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df39af5d-6c58-422c-9d2e-9d967d5a4e87
	I0811 23:25:17.697061   32156 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.697069   32156 round_trippers.go:580]     Audit-Id: 2ed040bb-4cad-4d2a-bc04-d0e4a9280573
	I0811 23:25:17.697077   32156 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.697085   32156 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.697091   32156 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: adc55b79-a60f-4004-b4c2-5a962b18600f
	I0811 23:25:17.697665   32156 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"981"},"items":[{"metadata":{"name":"multinode-618164","uid":"7e58c314-90d1-4f0a-99d7-b2716a280bf2","resourceVersion":"854","creationTimestamp":"2023-08-11T23:20:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-618164","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-618164","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_20_16_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14307 chars]
	I0811 23:25:17.698209   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:25:17.698224   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:17.698234   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:25:17.698237   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:17.698244   32156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0811 23:25:17.698247   32156 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:17.698252   32156 node_conditions.go:105] duration metric: took 186.915638ms to run NodePressure ...
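The per-node figures above (17784752Ki ephemeral storage, 2 CPUs, repeated once per node) are read from each node's .status.capacity, and NodePressure is judged from the node's conditions. A sketch of pulling the same fields with client-go; the kubeconfig path is an assumption:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		c, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			// Pressure conditions are what a NodePressure check would inspect.
			for _, cond := range n.Status.Conditions {
				if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
					fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
				}
			}
		}
	}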
	I0811 23:25:17.698263   32156 start.go:228] waiting for startup goroutines ...
	I0811 23:25:17.698287   32156 start.go:242] writing updated cluster config ...
	I0811 23:25:17.698695   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:25:17.698823   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:25:17.702527   32156 out.go:177] * Starting worker node multinode-618164-m03 in cluster multinode-618164
	I0811 23:25:17.703970   32156 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:25:17.703991   32156 cache.go:57] Caching tarball of preloaded images
	I0811 23:25:17.704072   32156 preload.go:174] Found /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0811 23:25:17.704083   32156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0811 23:25:17.704186   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:25:17.704337   32156 start.go:365] acquiring machines lock for multinode-618164-m03: {Name:mk5e6cee1d1e9195cd61b1fff8d9384d7220567d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0811 23:25:17.704376   32156 start.go:369] acquired machines lock for "multinode-618164-m03" in 20.954µs
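Machine operations are serialized behind the named lock shown above ({Delay:500ms Timeout:13m0s}): acquisition retries every Delay until Timeout expires. A rough file-based analogue of acquire-with-timeout; this is not minikube's actual lock implementation, only the shape of it:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire spins on an O_EXCL lock file, retrying every delay until the
	// timeout elapses, mirroring the {Delay:500ms Timeout:13m0s} spec above.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil // release
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines-multinode-618164-m03.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held")
	}

The uncontended case is what the log shows: the lock was acquired in 20.954µs because nothing else held it.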
	I0811 23:25:17.704389   32156 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:25:17.704393   32156 fix.go:54] fixHost starting: m03
	I0811 23:25:17.704629   32156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:25:17.704660   32156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:25:17.719031   32156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I0811 23:25:17.719507   32156 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:25:17.719966   32156 main.go:141] libmachine: Using API Version  1
	I0811 23:25:17.719988   32156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:25:17.720350   32156 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:25:17.720543   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:17.720707   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetState
	I0811 23:25:17.722326   32156 fix.go:102] recreateIfNeeded on multinode-618164-m03: state=Stopped err=<nil>
	I0811 23:25:17.724458   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	W0811 23:25:17.724641   32156 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:25:17.726332   32156 out.go:177] * Restarting existing kvm2 VM for "multinode-618164-m03" ...
	I0811 23:25:17.728121   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .Start
	I0811 23:25:17.728331   32156 main.go:141] libmachine: (multinode-618164-m03) Ensuring networks are active...
	I0811 23:25:17.729124   32156 main.go:141] libmachine: (multinode-618164-m03) Ensuring network default is active
	I0811 23:25:17.729469   32156 main.go:141] libmachine: (multinode-618164-m03) Ensuring network mk-multinode-618164 is active
	I0811 23:25:17.729812   32156 main.go:141] libmachine: (multinode-618164-m03) Getting domain xml...
	I0811 23:25:17.730556   32156 main.go:141] libmachine: (multinode-618164-m03) Creating domain...
	I0811 23:25:18.972672   32156 main.go:141] libmachine: (multinode-618164-m03) Waiting to get IP...
	I0811 23:25:18.973569   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:18.973976   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:18.974087   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:18.973983   32576 retry.go:31] will retry after 247.15448ms: waiting for machine to come up
	I0811 23:25:19.222450   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:19.223012   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:19.223045   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:19.222958   32576 retry.go:31] will retry after 320.207163ms: waiting for machine to come up
	I0811 23:25:19.545416   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:19.545806   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:19.545833   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:19.545772   32576 retry.go:31] will retry after 410.907641ms: waiting for machine to come up
	I0811 23:25:19.958311   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:19.958713   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:19.958746   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:19.958619   32576 retry.go:31] will retry after 529.355814ms: waiting for machine to come up
	I0811 23:25:20.489224   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:20.489697   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:20.489739   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:20.489659   32576 retry.go:31] will retry after 530.096222ms: waiting for machine to come up
	I0811 23:25:21.021185   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:21.021706   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:21.021729   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:21.021662   32576 retry.go:31] will retry after 792.292205ms: waiting for machine to come up
	I0811 23:25:21.815693   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:21.816071   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:21.816098   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:21.816019   32576 retry.go:31] will retry after 891.947853ms: waiting for machine to come up
	I0811 23:25:22.709969   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:22.710378   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:22.710404   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:22.710326   32576 retry.go:31] will retry after 1.186793563s: waiting for machine to come up
	I0811 23:25:23.898208   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:23.898777   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:23.898803   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:23.898711   32576 retry.go:31] will retry after 1.371024031s: waiting for machine to come up
	I0811 23:25:25.271009   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:25.271411   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:25.271434   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:25.271373   32576 retry.go:31] will retry after 2.293356428s: waiting for machine to come up
	I0811 23:25:27.566089   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:27.566561   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:27.566589   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:27.566512   32576 retry.go:31] will retry after 2.86191654s: waiting for machine to come up
	I0811 23:25:30.430526   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:30.430948   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:30.430979   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:30.430884   32576 retry.go:31] will retry after 2.696789013s: waiting for machine to come up
	I0811 23:25:33.129055   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:33.129437   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | unable to find current IP address of domain multinode-618164-m03 in network mk-multinode-618164
	I0811 23:25:33.129465   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | I0811 23:25:33.129382   32576 retry.go:31] will retry after 2.912914856s: waiting for machine to come up
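The retry.go lines above poll for the VM's DHCP lease with delays that grow roughly geometrically with jitter (247ms up to ~2.9s). A minimal sketch of that backoff pattern; the attempt count, base delay, and growth factor are illustrative, not minikube's exact parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryBackoff calls fn until it succeeds or attempts run out, sleeping
	// a jittered, growing delay between tries, like the retry lines above.
	func retryBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow ~1.5x per attempt
		}
		return errors.New("gave up waiting")
	}

	func main() {
		i := 0
		_ = retryBackoff(14, 200*time.Millisecond, func() error {
			i++
			if i < 5 {
				return errors.New("no IP yet") // stand-in for the DHCP-lease lookup
			}
			return nil
		})
	}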
	I0811 23:25:36.045333   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.045895   32156 main.go:141] libmachine: (multinode-618164-m03) Found IP for machine: 192.168.39.21
	I0811 23:25:36.045923   32156 main.go:141] libmachine: (multinode-618164-m03) Reserving static IP address...
	I0811 23:25:36.045950   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has current primary IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.046318   32156 main.go:141] libmachine: (multinode-618164-m03) Reserved static IP address: 192.168.39.21
	I0811 23:25:36.046343   32156 main.go:141] libmachine: (multinode-618164-m03) Waiting for SSH to be available...
	I0811 23:25:36.046365   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "multinode-618164-m03", mac: "52:54:00:f9:60:56", ip: "192.168.39.21"} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.046409   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | skip adding static IP to network mk-multinode-618164 - found existing host DHCP lease matching {name: "multinode-618164-m03", mac: "52:54:00:f9:60:56", ip: "192.168.39.21"}
	I0811 23:25:36.046443   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | Getting to WaitForSSH function...
	I0811 23:25:36.048418   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.048737   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.048769   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.048863   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | Using SSH client type: external
	I0811 23:25:36.048913   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa (-rw-------)
	I0811 23:25:36.048946   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0811 23:25:36.048960   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | About to run SSH command:
	I0811 23:25:36.048969   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | exit 0
	I0811 23:25:36.143723   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | SSH cmd err, output: <nil>: 
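WaitForSSH above shells out to the system ssh binary with the logged option list and runs `exit 0`; a zero exit status is the entire signal that the VM accepts SSH connections. A stand-alone sketch of the same probe, with the key path and address copied from the log and the option list abbreviated:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Probe SSH reachability: run `exit 0` remotely and treat a zero
		// exit status as "SSH available", as the DBG lines above do.
		key := "/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa"
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", key,
			"docker@192.168.39.21", "exit 0")
		if err := cmd.Run(); err != nil {
			fmt.Println("ssh not ready:", err)
			return
		}
		fmt.Println("ssh available")
	}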
	I0811 23:25:36.143998   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetConfigRaw
	I0811 23:25:36.144693   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetIP
	I0811 23:25:36.147146   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.147538   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.147572   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.147863   32156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/multinode-618164/config.json ...
	I0811 23:25:36.148048   32156 machine.go:88] provisioning docker machine ...
	I0811 23:25:36.148065   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:36.148332   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetMachineName
	I0811 23:25:36.148485   32156 buildroot.go:166] provisioning hostname "multinode-618164-m03"
	I0811 23:25:36.148504   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetMachineName
	I0811 23:25:36.148718   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.150817   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.151209   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.151241   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.151439   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.151635   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.151808   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.151996   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.152210   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.152601   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.152621   32156 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-618164-m03 && echo "multinode-618164-m03" | sudo tee /etc/hostname
	I0811 23:25:36.293780   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-618164-m03
	
	I0811 23:25:36.293808   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.297049   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.297512   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.297545   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.297709   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.297904   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.298085   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.298222   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.298373   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.298764   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.298787   32156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-618164-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-618164-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-618164-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:25:36.433352   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:25:36.433379   32156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17044-9593/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-9593/.minikube}
	I0811 23:25:36.433400   32156 buildroot.go:174] setting up certificates
	I0811 23:25:36.433409   32156 provision.go:83] configureAuth start
	I0811 23:25:36.433420   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetMachineName
	I0811 23:25:36.433718   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetIP
	I0811 23:25:36.436594   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.436937   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.436971   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.437106   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.439230   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.439550   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.439579   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.439671   32156 provision.go:138] copyHostCerts
	I0811 23:25:36.439709   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:25:36.439748   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem, removing ...
	I0811 23:25:36.439760   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem
	I0811 23:25:36.439831   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/ca.pem (1078 bytes)
	I0811 23:25:36.439904   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:25:36.439921   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem, removing ...
	I0811 23:25:36.439929   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem
	I0811 23:25:36.439952   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/cert.pem (1123 bytes)
	I0811 23:25:36.439993   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:25:36.440008   32156 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem, removing ...
	I0811 23:25:36.440014   32156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem
	I0811 23:25:36.440034   32156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-9593/.minikube/key.pem (1675 bytes)
	I0811 23:25:36.440096   32156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca-key.pem org=jenkins.multinode-618164-m03 san=[192.168.39.21 192.168.39.21 localhost 127.0.0.1 minikube multinode-618164-m03]
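The server certificate above is issued against the minikube CA with the listed SANs (the machine IP twice, localhost, 127.0.0.1, and both hostnames). A compact stand-in showing how those SANs land in an x509 template; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-618164-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "multinode-618164-m03"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.21"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed here (template is its own parent); minikube instead
		// passes the CA certificate and CA key as the signing pair.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}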
	I0811 23:25:36.501259   32156 provision.go:172] copyRemoteCerts
	I0811 23:25:36.501310   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:25:36.501330   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.504009   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.504432   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.504465   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.504639   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.504800   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.504964   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.505060   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:36.596832   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:25:36.596906   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0811 23:25:36.621591   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:25:36.621657   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0811 23:25:36.644337   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:25:36.644396   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:25:36.668335   32156 provision.go:86] duration metric: configureAuth took 234.912237ms
	I0811 23:25:36.668361   32156 buildroot.go:189] setting minikube options for container-runtime
	I0811 23:25:36.668554   32156 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:25:36.668575   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:36.668832   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.671119   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.671514   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.671579   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.671669   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.671923   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.672119   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.672335   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.672567   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.673055   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.673071   32156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 23:25:36.798036   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0811 23:25:36.798062   32156 buildroot.go:70] root file system type: tmpfs
	I0811 23:25:36.798178   32156 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 23:25:36.798200   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.801022   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.801362   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.801391   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.801528   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.801739   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.801915   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.802063   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.802210   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.802566   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.802649   32156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.6"
	Environment="NO_PROXY=192.168.39.6,192.168.39.254"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 23:25:36.940093   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.6
	Environment=NO_PROXY=192.168.39.6,192.168.39.254
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 23:25:36.940130   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:36.943126   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.943512   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:36.943546   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:36.943750   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:36.943935   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.944142   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:36.944307   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:36.944493   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:36.945117   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:36.945149   32156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 23:25:37.838733   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0811 23:25:37.838759   32156 machine.go:91] provisioned docker machine in 1.690697728s
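The unit-update command above is deliberately idempotent: `diff -u old new || { mv ...; daemon-reload; enable; restart; }` only installs the rendered file and restarts Docker when the content differs (here diff failed because no unit existed yet on the fresh boot, so the new file was installed and the service enabled). A local Go analogue of that write-if-changed step; the paths and the helper name are hypothetical:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// applyIfChanged mirrors the `diff || { mv; ... }` step above: replace
	// the live unit only when the newly rendered content differs.
	func applyIfChanged(newPath, livePath string) (bool, error) {
		newBytes, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		liveBytes, err := os.ReadFile(livePath)
		if err == nil && bytes.Equal(newBytes, liveBytes) {
			return false, nil // identical: nothing to do
		}
		if err := os.Rename(newPath, livePath); err != nil {
			return false, err
		}
		return true, nil // caller would daemon-reload and restart here
	}

	func main() {
		changed, err := applyIfChanged("/tmp/docker.service.new", "/tmp/docker.service")
		fmt.Println(changed, err)
	}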
	I0811 23:25:37.838769   32156 start.go:300] post-start starting for "multinode-618164-m03" (driver="kvm2")
	I0811 23:25:37.838778   32156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:25:37.838796   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:37.839181   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:25:37.839216   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:37.841673   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.842079   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:37.842111   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.842251   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:37.842440   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:37.842680   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:37.842835   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:37.937129   32156 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:25:37.941409   32156 command_runner.go:130] > NAME=Buildroot
	I0811 23:25:37.941430   32156 command_runner.go:130] > VERSION=2021.02.12-1-gb58903a-dirty
	I0811 23:25:37.941437   32156 command_runner.go:130] > ID=buildroot
	I0811 23:25:37.941445   32156 command_runner.go:130] > VERSION_ID=2021.02.12
	I0811 23:25:37.941452   32156 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0811 23:25:37.941635   32156 info.go:137] Remote host: Buildroot 2021.02.12
	I0811 23:25:37.941651   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/addons for local assets ...
	I0811 23:25:37.941708   32156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-9593/.minikube/files for local assets ...
	I0811 23:25:37.941797   32156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> 168362.pem in /etc/ssl/certs
	I0811 23:25:37.941809   32156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem -> /etc/ssl/certs/168362.pem
	I0811 23:25:37.941890   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:25:37.951136   32156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/ssl/certs/168362.pem --> /etc/ssl/certs/168362.pem (1708 bytes)
	I0811 23:25:37.972839   32156 start.go:303] post-start completed in 134.057637ms
	I0811 23:25:37.972859   32156 fix.go:56] fixHost completed within 20.268465262s
	I0811 23:25:37.972880   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:37.975862   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.976279   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:37.976308   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:37.976445   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:37.976635   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:37.976789   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:37.976944   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:37.977100   32156 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:37.977480   32156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0811 23:25:37.977491   32156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0811 23:25:38.104188   32156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691796338.052995938
	
	I0811 23:25:38.104213   32156 fix.go:206] guest clock: 1691796338.052995938
	I0811 23:25:38.104238   32156 fix.go:219] Guest: 2023-08-11 23:25:38.052995938 +0000 UTC Remote: 2023-08-11 23:25:37.972862052 +0000 UTC m=+125.283072685 (delta=80.133886ms)
	I0811 23:25:38.104257   32156 fix.go:190] guest clock delta is within tolerance: 80.133886ms
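	The fix.go lines above are minikube's guest-clock sanity check: it runs date +%s.%N on the VM over SSH, compares the result with the host clock, and accepts a sub-second delta (~80ms here). A minimal shell sketch of the same comparison (IP, user, and key path taken from this run; the bc arithmetic is an illustrative assumption, not minikube's code):
		# compare guest and host clocks the way fix.go logs above
		guest=$(ssh -i /home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa docker@192.168.39.21 'date +%s.%N')
		host=$(date +%s.%N)
		echo "delta: $(echo "$guest - $host" | bc)s"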
	I0811 23:25:38.104262   32156 start.go:83] releasing machines lock for "multinode-618164-m03", held for 20.399878116s
	I0811 23:25:38.104279   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.104576   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetIP
	I0811 23:25:38.107197   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.107628   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:38.107650   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.109952   32156 out.go:177] * Found network options:
	I0811 23:25:38.111798   32156 out.go:177]   - NO_PROXY=192.168.39.6,192.168.39.254
	W0811 23:25:38.113479   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	W0811 23:25:38.113500   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:25:38.113513   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.114070   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.114262   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .DriverName
	I0811 23:25:38.114348   32156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:25:38.114385   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	W0811 23:25:38.114478   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	W0811 23:25:38.114501   32156 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:25:38.114558   32156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:25:38.114573   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHHostname
	I0811 23:25:38.117304   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.117690   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:38.117719   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.117744   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.117866   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:38.118061   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:38.118233   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:38.118233   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:56", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:22:44 +0000 UTC Type:0 Mac:52:54:00:f9:60:56 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-618164-m03 Clientid:01:52:54:00:f9:60:56}
	I0811 23:25:38.118300   32156 main.go:141] libmachine: (multinode-618164-m03) DBG | domain multinode-618164-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:f9:60:56 in network mk-multinode-618164
	I0811 23:25:38.118395   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHPort
	I0811 23:25:38.118408   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:38.118555   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHKeyPath
	I0811 23:25:38.118692   32156 main.go:141] libmachine: (multinode-618164-m03) Calling .GetSSHUsername
	I0811 23:25:38.118819   32156 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m03/id_rsa Username:docker}
	I0811 23:25:38.214388   32156 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0811 23:25:38.214435   32156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0811 23:25:38.214539   32156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:25:38.239706   32156 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:25:38.240636   32156 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0811 23:25:38.240659   32156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:25:38.240668   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:25:38.240779   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:25:38.258654   32156 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0811 23:25:38.258755   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0811 23:25:38.268907   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0811 23:25:38.279426   32156 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0811 23:25:38.279494   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0811 23:25:38.289572   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:25:38.299314   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0811 23:25:38.309114   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0811 23:25:38.318624   32156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:25:38.328572   32156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
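	Taken together, the sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs driver, pause image, runc v2 shim, CNI conf dir). Assuming a stock Buildroot config layout, the net effect can be verified with:
		# key names taken from the sed commands above
		sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
		# expected after the rewrite:
		#   sandbox_image = "registry.k8s.io/pause:3.9"
		#   SystemdCgroup = false
		#   conf_dir = "/etc/cni/net.d"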
	I0811 23:25:38.338237   32156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:25:38.346331   32156 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:25:38.346394   32156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:25:38.354327   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:38.457471   32156 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0811 23:25:38.476104   32156 start.go:466] detecting cgroup driver to use...
	I0811 23:25:38.476184   32156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0811 23:25:38.495179   32156 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0811 23:25:38.495561   32156 command_runner.go:130] > [Unit]
	I0811 23:25:38.495584   32156 command_runner.go:130] > Description=Docker Application Container Engine
	I0811 23:25:38.495593   32156 command_runner.go:130] > Documentation=https://docs.docker.com
	I0811 23:25:38.495602   32156 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0811 23:25:38.495610   32156 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0811 23:25:38.495621   32156 command_runner.go:130] > StartLimitBurst=3
	I0811 23:25:38.495630   32156 command_runner.go:130] > StartLimitIntervalSec=60
	I0811 23:25:38.495636   32156 command_runner.go:130] > [Service]
	I0811 23:25:38.495646   32156 command_runner.go:130] > Type=notify
	I0811 23:25:38.495652   32156 command_runner.go:130] > Restart=on-failure
	I0811 23:25:38.495659   32156 command_runner.go:130] > Environment=NO_PROXY=192.168.39.6
	I0811 23:25:38.495676   32156 command_runner.go:130] > Environment=NO_PROXY=192.168.39.6,192.168.39.254
	I0811 23:25:38.495692   32156 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 23:25:38.495709   32156 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 23:25:38.495737   32156 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 23:25:38.495751   32156 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0811 23:25:38.495765   32156 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 23:25:38.495779   32156 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 23:25:38.495833   32156 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 23:25:38.495852   32156 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 23:25:38.495866   32156 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 23:25:38.495876   32156 command_runner.go:130] > ExecStart=
	I0811 23:25:38.495903   32156 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0811 23:25:38.495916   32156 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 23:25:38.495930   32156 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 23:25:38.495944   32156 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 23:25:38.495953   32156 command_runner.go:130] > LimitNOFILE=infinity
	I0811 23:25:38.495960   32156 command_runner.go:130] > LimitNPROC=infinity
	I0811 23:25:38.495969   32156 command_runner.go:130] > LimitCORE=infinity
	I0811 23:25:38.495978   32156 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0811 23:25:38.495989   32156 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0811 23:25:38.495999   32156 command_runner.go:130] > TasksMax=infinity
	I0811 23:25:38.496005   32156 command_runner.go:130] > TimeoutStartSec=0
	I0811 23:25:38.496018   32156 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 23:25:38.496026   32156 command_runner.go:130] > Delegate=yes
	I0811 23:25:38.496046   32156 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0811 23:25:38.496055   32156 command_runner.go:130] > KillMode=process
	I0811 23:25:38.496061   32156 command_runner.go:130] > [Install]
	I0811 23:25:38.496069   32156 command_runner.go:130] > WantedBy=multi-user.target
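	The comment block inside the unit above describes the standard systemd override pattern: an empty ExecStart= first clears the command inherited from the base unit, since systemd refuses to start a non-oneshot service that ends up with two ExecStart= settings. A minimal, illustrative drop-in using the same pattern (path and dockerd flags here are examples, not minikube's):
		sudo mkdir -p /etc/systemd/system/docker.service.d
		sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
		[Service]
		# clear the inherited command, then set the replacement
		ExecStart=
		ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
		EOF
		sudo systemctl daemon-reload && sudo systemctl restart docker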
	I0811 23:25:38.496303   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:25:38.514306   32156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:25:38.534347   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:25:38.546429   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:25:38.557721   32156 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0811 23:25:38.591792   32156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0811 23:25:38.605657   32156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:25:38.624649   32156 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0811 23:25:38.625095   32156 ssh_runner.go:195] Run: which cri-dockerd
	I0811 23:25:38.628675   32156 command_runner.go:130] > /usr/bin/cri-dockerd
	I0811 23:25:38.628776   32156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0811 23:25:38.637446   32156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0811 23:25:38.655221   32156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0811 23:25:38.757647   32156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0811 23:25:38.866252   32156 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0811 23:25:38.866289   32156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
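	The 144-byte daemon.json above is written from memory and its exact contents are not logged; a representative (assumed) configuration that selects the cgroupfs driver, as the docker.go line indicates, would be:
		sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"],
		  "log-driver": "json-file"
		}
		EOF
		sudo systemctl restart docker   # matches the restart logged below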
	I0811 23:25:38.883536   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:39.000609   32156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0811 23:25:40.459788   32156 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.45914496s)
	I0811 23:25:40.459842   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:40.571329   32156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0811 23:25:40.695944   32156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0811 23:25:40.813305   32156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:40.926702   32156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0811 23:25:40.942637   32156 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0811 23:25:40.945410   32156 out.go:177] 
	W0811 23:25:40.946925   32156 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0811 23:25:40.946942   32156 out.go:239] * 
	W0811 23:25:40.947740   32156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 23:25:40.949375   32156 out.go:177] 
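	The run aborts here: restarting cri-docker.socket exited non-zero, so the container runtime on m03 never comes up and the remaining log is the `minikube logs` dump. A hedged triage sketch for a failed socket unit like this one (standard systemd commands, run on the affected node, e.g. via minikube ssh -p multinode-618164 -n multinode-618164-m03):
		sudo systemctl status cri-docker.socket cri-docker.service
		sudo journalctl -xeu cri-docker.socket
		# socket units often fail when the backing service is masked/disabled
		# or when the listen path is already in use:
		sudo systemctl is-enabled cri-docker.service
		ls -l /var/run/cri-dockerd.sock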
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-08-11 23:23:44 UTC, ends at Fri 2023-08-11 23:25:42 UTC. --
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.772404979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.772553073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.772581762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.772592874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.773642508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.773755142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 11 23:24:31 multinode-618164 dockerd[839]: time="2023-08-11T23:24:31.773769896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:32 multinode-618164 cri-dockerd[1114]: time="2023-08-11T23:24:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6cd8ddb90c2724d6b3c3c3966e55acfd7deda5ad0a411597b24617b116ee0b6f/resolv.conf as [nameserver 192.168.122.1]"
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.355729968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.363672866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.363821026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.363955195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:32 multinode-618164 cri-dockerd[1114]: time="2023-08-11T23:24:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/52c5ac88204d49bba7a91c159de18df4e4d7abed122de232dd7fc67cddb69496/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.723939681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.724004152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.724027439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 11 23:24:32 multinode-618164 dockerd[839]: time="2023-08-11T23:24:32.724041501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:24:47 multinode-618164 dockerd[833]: time="2023-08-11T23:24:47.785692505Z" level=info msg="ignoring event" container=692c8a63b43f3fcd00364faddd11652aee717c5728f59c65a4669cf123df58ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 23:24:47 multinode-618164 dockerd[839]: time="2023-08-11T23:24:47.786671216Z" level=info msg="shim disconnected" id=692c8a63b43f3fcd00364faddd11652aee717c5728f59c65a4669cf123df58ef namespace=moby
	Aug 11 23:24:47 multinode-618164 dockerd[839]: time="2023-08-11T23:24:47.786777244Z" level=warning msg="cleaning up after shim disconnected" id=692c8a63b43f3fcd00364faddd11652aee717c5728f59c65a4669cf123df58ef namespace=moby
	Aug 11 23:24:47 multinode-618164 dockerd[839]: time="2023-08-11T23:24:47.786790171Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 11 23:25:02 multinode-618164 dockerd[839]: time="2023-08-11T23:25:02.720308285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 11 23:25:02 multinode-618164 dockerd[839]: time="2023-08-11T23:25:02.720404305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 11 23:25:02 multinode-618164 dockerd[839]: time="2023-08-11T23:25:02.720436579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 11 23:25:02 multinode-618164 dockerd[839]: time="2023-08-11T23:25:02.720447633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	4cab6ac338f6a       6e38f40d628db                                                                                         40 seconds ago       Running             storage-provisioner       2                   5a0ecbf5e9652
	38699d5198a52       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   52c5ac88204d4
	034d62264774d       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   6cd8ddb90c272
	e692164b748df       b0b1fa0f58c6e                                                                                         About a minute ago   Running             kindnet-cni               1                   5982ccf26a169
	692c8a63b43f3       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5a0ecbf5e9652
	da196d76236ba       6848d7eda0341                                                                                         About a minute ago   Running             kube-proxy                1                   dae95b7f96a3a
	ecb627dc19ed9       86b6af7dd652c                                                                                         About a minute ago   Running             etcd                      1                   e55e958885d89
	62a61bbe0b1e5       98ef2570f3cde                                                                                         About a minute ago   Running             kube-scheduler            1                   b687ecd76cfac
	0c4168e6eaf9f       f466468864b7a                                                                                         About a minute ago   Running             kube-controller-manager   1                   7326a3fb57907
	737c75301858a       e7972205b6614                                                                                         About a minute ago   Running             kube-apiserver            1                   b56d36e0d1a13
	93e34e0c73f7e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   2011dfae21046
	e5175209bd61f       ead0a4a53df89                                                                                         5 minutes ago        Exited              coredns                   0                   5b35741c12db6
	feef63247dc8c       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974              5 minutes ago        Exited              kindnet-cni               0                   53769ace7d8fd
	c0158a6605ea7       6848d7eda0341                                                                                         5 minutes ago        Exited              kube-proxy                0                   c453bb965128e
	ef74cd56c60d4       98ef2570f3cde                                                                                         5 minutes ago        Exited              kube-scheduler            0                   e102c9cb8b461
	a3429cc90df2a       86b6af7dd652c                                                                                         5 minutes ago        Exited              etcd                      0                   5db82ba10c90b
	2965fda37c078       e7972205b6614                                                                                         5 minutes ago        Exited              kube-apiserver            0                   609eb0503045a
	5f9d39ea2d1fd       f466468864b7a                                                                                         5 minutes ago        Exited              kube-controller-manager   0                   208f3b4c3f22e
	
	* 
	* ==> coredns [034d62264774] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45131 - 31434 "HINFO IN 23808321834339079.2096763463489472082. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.02422327s
	
	* 
	* ==> coredns [e5175209bd61] <==
	* [INFO] 10.244.1.2:46585 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001928777s
	[INFO] 10.244.1.2:34793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203647s
	[INFO] 10.244.1.2:49759 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123136s
	[INFO] 10.244.1.2:55430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001300079s
	[INFO] 10.244.1.2:32975 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106301s
	[INFO] 10.244.1.2:53199 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075779s
	[INFO] 10.244.1.2:54447 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077165s
	[INFO] 10.244.0.3:53247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097055s
	[INFO] 10.244.0.3:46640 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112932s
	[INFO] 10.244.0.3:41385 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042223s
	[INFO] 10.244.0.3:41005 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043985s
	[INFO] 10.244.1.2:55648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140205s
	[INFO] 10.244.1.2:34706 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200073s
	[INFO] 10.244.1.2:36558 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103238s
	[INFO] 10.244.1.2:36100 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075633s
	[INFO] 10.244.0.3:59317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122522s
	[INFO] 10.244.0.3:54723 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224288s
	[INFO] 10.244.0.3:54317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173489s
	[INFO] 10.244.0.3:57480 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251637s
	[INFO] 10.244.1.2:59963 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123467s
	[INFO] 10.244.1.2:49371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00045399s
	[INFO] 10.244.1.2:33378 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134968s
	[INFO] 10.244.1.2:55296 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000585633s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-618164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-618164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5
	                    minikube.k8s.io/name=multinode-618164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_11T23_20_16_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:20:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-618164
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:25:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:24:25 +0000   Fri, 11 Aug 2023 23:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:24:25 +0000   Fri, 11 Aug 2023 23:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:24:25 +0000   Fri, 11 Aug 2023 23:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:24:25 +0000   Fri, 11 Aug 2023 23:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    multinode-618164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d217c64470ae4e30a75a23638452cf8c
	  System UUID:                d217c644-70ae-4e30-a75a-23638452cf8c
	  Boot ID:                    4f707916-0254-4568-a468-cf251325f8aa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-dspxl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 coredns-5d78c9869d-zrmf9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m15s
	  kube-system                 etcd-multinode-618164                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m27s
	  kube-system                 kindnet-szdxp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-multinode-618164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-multinode-618164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-glw45                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-multinode-618164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  Starting                 85s                    kube-proxy       
	  Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node multinode-618164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node multinode-618164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node multinode-618164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m27s                  kubelet          Node multinode-618164 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m27s                  kubelet          Node multinode-618164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s                  kubelet          Node multinode-618164 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m16s                  node-controller  Node multinode-618164 event: Registered Node multinode-618164 in Controller
	  Normal  NodeReady                5m3s                   kubelet          Node multinode-618164 status is now: NodeReady
	  Normal  Starting                 94s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s (x8 over 94s)      kubelet          Node multinode-618164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x8 over 94s)      kubelet          Node multinode-618164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x7 over 94s)      kubelet          Node multinode-618164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           75s                    node-controller  Node multinode-618164 event: Registered Node multinode-618164 in Controller
	
	
	Name:               multinode-618164-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-618164-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-618164-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:25:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:25:15 +0000   Fri, 11 Aug 2023 23:25:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:25:15 +0000   Fri, 11 Aug 2023 23:25:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:25:15 +0000   Fri, 11 Aug 2023 23:25:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:25:15 +0000   Fri, 11 Aug 2023 23:25:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.254
	  Hostname:    multinode-618164-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1753f3a07a344f528a32520fd39c0c9c
	  System UUID:                1753f3a0-7a34-4f52-8a32-520fd39c0c9c
	  Boot ID:                    43147f1a-599d-4601-922f-8118ffbe5023
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-m2c5t       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m27s
	  kube-system                 kube-proxy-9ldtq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m20s                  kube-proxy  
	  Normal  Starting                 35s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  4m27s (x5 over 4m28s)  kubelet     Node multinode-618164-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x5 over 4m28s)  kubelet     Node multinode-618164-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x5 over 4m28s)  kubelet     Node multinode-618164-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m16s                  kubelet     Node multinode-618164-m02 status is now: NodeReady
	  Normal  Starting                 37s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x2 over 37s)      kubelet     Node multinode-618164-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x2 over 37s)      kubelet     Node multinode-618164-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x2 over 37s)      kubelet     Node multinode-618164-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                27s                    kubelet     Node multinode-618164-m02 status is now: NodeReady
	
	
	Name:               multinode-618164-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-618164-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:22:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-618164-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:23:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 11 Aug 2023 23:23:01 +0000   Fri, 11 Aug 2023 23:25:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 11 Aug 2023 23:23:01 +0000   Fri, 11 Aug 2023 23:25:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 11 Aug 2023 23:23:01 +0000   Fri, 11 Aug 2023 23:25:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 11 Aug 2023 23:23:01 +0000   Fri, 11 Aug 2023 23:25:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    multinode-618164-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e96d66eaf3af4cd1a52173e0b337687f
	  System UUID:                e96d66ea-f3af-4cd1-a521-73e0b337687f
	  Boot ID:                    b727d8ac-fc44-418e-8a0c-bdc657dca17a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-hx8zk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kindnet-clfqj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m35s
	  kube-system                 kube-proxy-pv5p5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m29s                  kube-proxy       
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m35s (x5 over 3m37s)  kubelet          Node multinode-618164-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s (x5 over 3m37s)  kubelet          Node multinode-618164-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s (x5 over 3m37s)  kubelet          Node multinode-618164-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m24s                  kubelet          Node multinode-618164-m03 status is now: NodeReady
	  Normal  Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m49s (x2 over 2m50s)  kubelet          Node multinode-618164-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s (x2 over 2m50s)  kubelet          Node multinode-618164-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s (x2 over 2m50s)  kubelet          Node multinode-618164-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m41s                  kubelet          Node multinode-618164-m03 status is now: NodeReady
	  Normal  RegisteredNode           75s                    node-controller  Node multinode-618164-m03 event: Registered Node multinode-618164-m03 in Controller
	  Normal  NodeNotReady             35s                    node-controller  Node multinode-618164-m03 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* [Aug11 23:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070740] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.309033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.442735] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141290] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.460930] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.662869] systemd-fstab-generator[513]: Ignoring "noauto" for root device
	[  +0.113934] systemd-fstab-generator[530]: Ignoring "noauto" for root device
	[  +1.126749] systemd-fstab-generator[761]: Ignoring "noauto" for root device
	[  +0.280345] systemd-fstab-generator[798]: Ignoring "noauto" for root device
	[  +0.110879] systemd-fstab-generator[809]: Ignoring "noauto" for root device
	[  +0.120508] systemd-fstab-generator[822]: Ignoring "noauto" for root device
	[  +1.573878] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.106859] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
	[  +0.109484] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.100215] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +0.138586] systemd-fstab-generator[1081]: Ignoring "noauto" for root device
	[Aug11 23:24] systemd-fstab-generator[1350]: Ignoring "noauto" for root device
	[  +0.432470] kauditd_printk_skb: 67 callbacks suppressed
	[ +19.350291] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [a3429cc90df2] <==
	* {"level":"info","ts":"2023-08-11T23:20:09.495Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2023-08-11T23:20:10.099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-11T23:20:10.099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-11T23:20:10.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 1"}
	{"level":"info","ts":"2023-08-11T23:20:10.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 2"}
	{"level":"info","ts":"2023-08-11T23:20:10.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2023-08-11T23:20:10.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 2"}
	{"level":"info","ts":"2023-08-11T23:20:10.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2023-08-11T23:20:10.101Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:multinode-618164 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-11T23:20:10.102Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:20:10.102Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:20:10.103Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-11T23:20:10.104Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:20:10.104Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2023-08-11T23:20:10.105Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-11T23:20:10.105Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-11T23:20:10.113Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:20:10.117Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:20:10.117Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:23:04.551Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-11T23:23:04.551Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-618164","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	{"level":"info","ts":"2023-08-11T23:23:04.574Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6f26d2d338759d80","current-leader-member-id":"6f26d2d338759d80"}
	{"level":"info","ts":"2023-08-11T23:23:04.578Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2023-08-11T23:23:04.580Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2023-08-11T23:23:04.580Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-618164","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	* 
	* ==> etcd [ecb627dc19ed] <==
	* {"level":"info","ts":"2023-08-11T23:24:12.229Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:24:12.230Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:24:12.230Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:24:12.231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 switched to configuration voters=(8009320791952170368)"}
	{"level":"info","ts":"2023-08-11T23:24:12.231Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","added-peer-id":"6f26d2d338759d80","added-peer-peer-urls":["https://192.168.39.6:2380"]}
	{"level":"info","ts":"2023-08-11T23:24:12.232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:24:12.232Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:24:12.233Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"6f26d2d338759d80","initial-advertise-peer-urls":["https://192.168.39.6:2380"],"listen-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-11T23:24:12.233Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-11T23:24:12.234Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2023-08-11T23:24:12.234Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2023-08-11T23:24:13.305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:13.305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:13.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:13.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 3"}
	{"level":"info","ts":"2023-08-11T23:24:13.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2023-08-11T23:24:13.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 3"}
	{"level":"info","ts":"2023-08-11T23:24:13.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2023-08-11T23:24:13.308Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:multinode-618164 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-11T23:24:13.308Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:24:13.308Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:24:13.309Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2023-08-11T23:24:13.318Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-11T23:24:13.311Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-11T23:24:13.318Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:25:42 up 2 min,  0 users,  load average: 0.59, 0.37, 0.14
	Linux multinode-618164 5.10.57 #1 SMP Tue Aug 1 02:07:57 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [e692164b748d] <==
	* I0811 23:25:01.043662       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.3.0/24] 
	I0811 23:25:11.053921       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:25:11.054133       1 main.go:227] handling current node
	I0811 23:25:11.054236       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:25:11.054285       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:25:11.054633       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:25:11.054721       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.3.0/24] 
	I0811 23:25:21.068704       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:25:21.068831       1 main.go:227] handling current node
	I0811 23:25:21.068857       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:25:21.068975       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:25:21.069175       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:25:21.069254       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.3.0/24] 
	I0811 23:25:31.076107       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:25:31.076302       1 main.go:227] handling current node
	I0811 23:25:31.076531       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:25:31.076830       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:25:31.077229       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:25:31.077313       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.3.0/24] 
	I0811 23:25:41.085116       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:25:41.085163       1 main.go:227] handling current node
	I0811 23:25:41.085175       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:25:41.085181       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:25:41.085347       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:25:41.085384       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kindnet [feef63247dc8] <==
	* I0811 23:22:25.102734       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:22:25.102791       1 main.go:227] handling current node
	I0811 23:22:25.102803       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:22:25.102809       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:22:25.103191       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:22:25.103228       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.2.0/24] 
	I0811 23:22:35.110113       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:22:35.110502       1 main.go:227] handling current node
	I0811 23:22:35.110716       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:22:35.110895       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:22:35.111215       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:22:35.111453       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.2.0/24] 
	I0811 23:22:45.120640       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:22:45.120862       1 main.go:227] handling current node
	I0811 23:22:45.120921       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:22:45.120953       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:22:45.121179       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:22:45.121339       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.2.0/24] 
	I0811 23:22:55.136512       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0811 23:22:55.136535       1 main.go:227] handling current node
	I0811 23:22:55.136565       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0811 23:22:55.136569       1 main.go:250] Node multinode-618164-m02 has CIDR [10.244.1.0/24] 
	I0811 23:22:55.136741       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I0811 23:22:55.136747       1 main.go:250] Node multinode-618164-m03 has CIDR [10.244.3.0/24] 
	I0811 23:22:55.136808       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.21 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [2965fda37c07] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0811 23:23:14.495615       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0811 23:23:14.528614       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0811 23:23:14.548440       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [737c75301858] <==
	* I0811 23:24:15.119559       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0811 23:24:15.119658       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0811 23:24:15.119784       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0811 23:24:15.137124       1 aggregator.go:152] initial CRD sync complete...
	I0811 23:24:15.137239       1 autoregister_controller.go:141] Starting autoregister controller
	I0811 23:24:15.137260       1 cache.go:32] Waiting for caches to sync for autoregister controller
	E0811 23:24:15.156784       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0811 23:24:15.160892       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0811 23:24:15.168343       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0811 23:24:15.168537       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0811 23:24:15.169041       1 shared_informer.go:318] Caches are synced for configmaps
	I0811 23:24:15.169066       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0811 23:24:15.169357       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0811 23:24:15.175227       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0811 23:24:15.188451       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0811 23:24:15.237541       1 cache.go:39] Caches are synced for autoregister controller
	I0811 23:24:15.726723       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0811 23:24:16.073322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0811 23:24:17.978037       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0811 23:24:18.104953       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0811 23:24:18.116289       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0811 23:24:18.183304       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0811 23:24:18.191151       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0811 23:24:27.878983       1 controller.go:624] quota admission added evaluator for: endpoints
	I0811 23:24:27.899763       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [0c4168e6eaf9] <==
	* I0811 23:24:27.864687       1 event.go:307] "Event occurred" object="multinode-618164-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-618164-m02 event: Registered Node multinode-618164-m02 in Controller"
	I0811 23:24:27.864821       1 event.go:307] "Event occurred" object="multinode-618164-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-618164-m03 event: Registered Node multinode-618164-m03 in Controller"
	I0811 23:24:27.864904       1 event.go:307] "Event occurred" object="multinode-618164" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-618164 event: Registered Node multinode-618164 in Controller"
	I0811 23:24:27.866216       1 shared_informer.go:318] Caches are synced for endpoint
	I0811 23:24:27.874709       1 shared_informer.go:318] Caches are synced for daemon sets
	I0811 23:24:27.884177       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0811 23:24:27.884230       1 shared_informer.go:318] Caches are synced for PVC protection
	I0811 23:24:27.885922       1 shared_informer.go:318] Caches are synced for job
	I0811 23:24:27.890893       1 shared_informer.go:318] Caches are synced for deployment
	I0811 23:24:27.892320       1 shared_informer.go:318] Caches are synced for GC
	I0811 23:24:27.901708       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:24:27.902533       1 shared_informer.go:318] Caches are synced for HPA
	I0811 23:24:28.239987       1 shared_informer.go:318] Caches are synced for garbage collector
	I0811 23:24:28.270118       1 shared_informer.go:318] Caches are synced for garbage collector
	I0811 23:24:28.270239       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0811 23:25:01.501345       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-hx8zk"
	W0811 23:25:04.509085       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m03 node
	I0811 23:25:05.359546       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-618164-m02\" does not exist"
	W0811 23:25:05.360136       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m03 node
	I0811 23:25:05.366817       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-618164-m02" podCIDRs=[10.244.1.0/24]
	I0811 23:25:07.881866       1 event.go:307] "Event occurred" object="multinode-618164-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-618164-m03 status is now: NodeNotReady"
	I0811 23:25:07.900652       1 event.go:307] "Event occurred" object="kube-system/kindnet-clfqj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0811 23:25:07.917057       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-pv5p5" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0811 23:25:15.637807       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	I0811 23:25:17.931029       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-vrdpw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-vrdpw"
	
	* 
	* ==> kube-controller-manager [5f9d39ea2d1f] <==
	* I0811 23:20:42.000184       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d-zrmf9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5d78c9869d-zrmf9"
	I0811 23:20:42.000522       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0811 23:21:15.978324       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-618164-m02\" does not exist"
	I0811 23:21:15.994696       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-618164-m02" podCIDRs=[10.244.1.0/24]
	I0811 23:21:16.009953       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9ldtq"
	I0811 23:21:16.019072       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-m2c5t"
	I0811 23:21:17.005410       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-618164-m02"
	I0811 23:21:17.005762       1 event.go:307] "Event occurred" object="multinode-618164-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-618164-m02 event: Registered Node multinode-618164-m02 in Controller"
	W0811 23:21:26.294065       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	I0811 23:21:28.971205       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0811 23:21:28.996941       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-vrdpw"
	I0811 23:21:29.014125       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-dspxl"
	W0811 23:22:07.298664       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	I0811 23:22:07.299007       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-618164-m03\" does not exist"
	I0811 23:22:07.323698       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-clfqj"
	I0811 23:22:07.332567       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pv5p5"
	I0811 23:22:07.362043       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-618164-m03" podCIDRs=[10.244.2.0/24]
	I0811 23:22:12.028524       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-618164-m03"
	I0811 23:22:12.028658       1 event.go:307] "Event occurred" object="multinode-618164-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-618164-m03 event: Registered Node multinode-618164-m03 in Controller"
	W0811 23:22:18.059179       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	W0811 23:22:52.232850       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	I0811 23:22:53.077759       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-618164-m03\" does not exist"
	W0811 23:22:53.077829       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	I0811 23:22:53.091667       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-618164-m03" podCIDRs=[10.244.3.0/24]
	W0811 23:23:01.414601       1 topologycache.go:232] Can't get CPU or zone information for multinode-618164-m02 node
	
	* 
	* ==> kube-proxy [c0158a6605ea] <==
	* I0811 23:20:28.634724       1 node.go:141] Successfully retrieved node IP: 192.168.39.6
	I0811 23:20:28.634815       1 server_others.go:110] "Detected node IP" address="192.168.39.6"
	I0811 23:20:28.634830       1 server_others.go:554] "Using iptables proxy"
	I0811 23:20:28.741334       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0811 23:20:28.741352       1 server_others.go:192] "Using iptables Proxier"
	I0811 23:20:28.741382       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0811 23:20:28.741820       1 server.go:658] "Version info" version="v1.27.4"
	I0811 23:20:28.741830       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:20:28.742558       1 config.go:188] "Starting service config controller"
	I0811 23:20:28.742578       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0811 23:20:28.742597       1 config.go:97] "Starting endpoint slice config controller"
	I0811 23:20:28.742600       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0811 23:20:28.745071       1 config.go:315] "Starting node config controller"
	I0811 23:20:28.745081       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0811 23:20:28.842957       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0811 23:20:28.843019       1 shared_informer.go:318] Caches are synced for service config
	I0811 23:20:28.845934       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [da196d76236b] <==
	* I0811 23:24:16.759305       1 node.go:141] Successfully retrieved node IP: 192.168.39.6
	I0811 23:24:16.759409       1 server_others.go:110] "Detected node IP" address="192.168.39.6"
	I0811 23:24:16.759766       1 server_others.go:554] "Using iptables proxy"
	I0811 23:24:17.031799       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0811 23:24:17.031849       1 server_others.go:192] "Using iptables Proxier"
	I0811 23:24:17.032405       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0811 23:24:17.033659       1 server.go:658] "Version info" version="v1.27.4"
	I0811 23:24:17.033671       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:24:17.035433       1 config.go:188] "Starting service config controller"
	I0811 23:24:17.035981       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0811 23:24:17.036009       1 config.go:315] "Starting node config controller"
	I0811 23:24:17.036013       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0811 23:24:17.036803       1 config.go:97] "Starting endpoint slice config controller"
	I0811 23:24:17.036812       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0811 23:24:17.142309       1 shared_informer.go:318] Caches are synced for node config
	I0811 23:24:17.142454       1 shared_informer.go:318] Caches are synced for service config
	I0811 23:24:17.147656       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [62a61bbe0b1e] <==
	* I0811 23:24:13.272913       1 serving.go:348] Generated self-signed cert in-memory
	W0811 23:24:15.152255       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0811 23:24:15.152588       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0811 23:24:15.152706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 23:24:15.152861       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 23:24:15.188099       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0811 23:24:15.188228       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:24:15.190404       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0811 23:24:15.190787       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 23:24:15.192203       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0811 23:24:15.192384       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0811 23:24:15.291541       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ef74cd56c60d] <==
	* W0811 23:20:11.949336       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 23:20:11.949376       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0811 23:20:12.752668       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 23:20:12.752694       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0811 23:20:12.840797       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 23:20:12.840985       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0811 23:20:12.898581       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 23:20:12.898634       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0811 23:20:12.941731       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 23:20:12.941794       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0811 23:20:13.124953       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 23:20:13.125186       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0811 23:20:13.201003       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 23:20:13.201029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0811 23:20:13.203001       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 23:20:13.203022       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0811 23:20:13.238509       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 23:20:13.238876       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0811 23:20:13.248479       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0811 23:20:13.248696       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0811 23:20:13.318882       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 23:20:13.318913       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0811 23:20:14.528207       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 23:23:04.694365       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0811 23:23:04.694485       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-08-11 23:23:44 UTC, ends at Fri 2023-08-11 23:25:43 UTC. --
	Aug 11 23:24:19 multinode-618164 kubelet[1356]: E0811 23:24:19.328913    1356 projected.go:198] Error preparing data for projected volume kube-api-access-6x4rn for pod default/busybox-67b7f59bb-dspxl: object "default"/"kube-root-ca.crt" not registered
	Aug 11 23:24:19 multinode-618164 kubelet[1356]: E0811 23:24:19.328965    1356 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/864020ad-35b4-4910-a769-ccdebd9c3758-kube-api-access-6x4rn podName:864020ad-35b4-4910-a769-ccdebd9c3758 nodeName:}" failed. No retries permitted until 2023-08-11 23:24:23.328951335 +0000 UTC m=+15.103693262 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6x4rn" (UniqueName: "kubernetes.io/projected/864020ad-35b4-4910-a769-ccdebd9c3758-kube-api-access-6x4rn") pod "busybox-67b7f59bb-dspxl" (UID: "864020ad-35b4-4910-a769-ccdebd9c3758") : object "default"/"kube-root-ca.crt" not registered
	Aug 11 23:24:19 multinode-618164 kubelet[1356]: I0811 23:24:19.826250    1356 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5982ccf26a169dbcbab953bd55369a58b78e4fc29996c8285f0bd331b30aa147"
	Aug 11 23:24:19 multinode-618164 kubelet[1356]: I0811 23:24:19.852803    1356 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0ecbf5e96523350d12ebf7c7931c2117660ce0a85aaec452f67b6d16f916b8"
	Aug 11 23:24:19 multinode-618164 kubelet[1356]: E0811 23:24:19.855676    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-dspxl" podUID=864020ad-35b4-4910-a769-ccdebd9c3758
	Aug 11 23:24:19 multinode-618164 kubelet[1356]: E0811 23:24:19.855981    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-zrmf9" podUID=c3c83ae1-ae12-4872-9c78-4aff9f1cefe4
	Aug 11 23:24:21 multinode-618164 kubelet[1356]: E0811 23:24:21.610050    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-dspxl" podUID=864020ad-35b4-4910-a769-ccdebd9c3758
	Aug 11 23:24:21 multinode-618164 kubelet[1356]: E0811 23:24:21.610233    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-zrmf9" podUID=c3c83ae1-ae12-4872-9c78-4aff9f1cefe4
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.256359    1356 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.256518    1356 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3c83ae1-ae12-4872-9c78-4aff9f1cefe4-config-volume podName:c3c83ae1-ae12-4872-9c78-4aff9f1cefe4 nodeName:}" failed. No retries permitted until 2023-08-11 23:24:31.256443597 +0000 UTC m=+23.031185524 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3c83ae1-ae12-4872-9c78-4aff9f1cefe4-config-volume") pod "coredns-5d78c9869d-zrmf9" (UID: "c3c83ae1-ae12-4872-9c78-4aff9f1cefe4") : object "kube-system"/"coredns" not registered
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.357115    1356 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.357272    1356 projected.go:198] Error preparing data for projected volume kube-api-access-6x4rn for pod default/busybox-67b7f59bb-dspxl: object "default"/"kube-root-ca.crt" not registered
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.357595    1356 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/864020ad-35b4-4910-a769-ccdebd9c3758-kube-api-access-6x4rn podName:864020ad-35b4-4910-a769-ccdebd9c3758 nodeName:}" failed. No retries permitted until 2023-08-11 23:24:31.357574615 +0000 UTC m=+23.132316533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6x4rn" (UniqueName: "kubernetes.io/projected/864020ad-35b4-4910-a769-ccdebd9c3758-kube-api-access-6x4rn") pod "busybox-67b7f59bb-dspxl" (UID: "864020ad-35b4-4910-a769-ccdebd9c3758") : object "default"/"kube-root-ca.crt" not registered
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.609202    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-dspxl" podUID=864020ad-35b4-4910-a769-ccdebd9c3758
	Aug 11 23:24:23 multinode-618164 kubelet[1356]: E0811 23:24:23.609601    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-zrmf9" podUID=c3c83ae1-ae12-4872-9c78-4aff9f1cefe4
	Aug 11 23:24:32 multinode-618164 kubelet[1356]: I0811 23:24:32.542009    1356 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c5ac88204d49bba7a91c159de18df4e4d7abed122de232dd7fc67cddb69496"
	Aug 11 23:24:32 multinode-618164 kubelet[1356]: I0811 23:24:32.588613    1356 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cd8ddb90c2724d6b3c3c3966e55acfd7deda5ad0a411597b24617b116ee0b6f"
	Aug 11 23:24:47 multinode-618164 kubelet[1356]: I0811 23:24:47.862339    1356 scope.go:115] "RemoveContainer" containerID="5bb51d1cc942aa47ee1ed35b9303aa88a19c2eda1da4cdadaabea7870a644862"
	Aug 11 23:24:47 multinode-618164 kubelet[1356]: I0811 23:24:47.863392    1356 scope.go:115] "RemoveContainer" containerID="692c8a63b43f3fcd00364faddd11652aee717c5728f59c65a4669cf123df58ef"
	Aug 11 23:24:47 multinode-618164 kubelet[1356]: E0811 23:24:47.866389    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(84ba55f6-4725-46ae-810f-130cbb82dd7f)\"" pod="kube-system/storage-provisioner" podUID=84ba55f6-4725-46ae-810f-130cbb82dd7f
	Aug 11 23:25:02 multinode-618164 kubelet[1356]: I0811 23:25:02.610112    1356 scope.go:115] "RemoveContainer" containerID="692c8a63b43f3fcd00364faddd11652aee717c5728f59c65a4669cf123df58ef"
	Aug 11 23:25:08 multinode-618164 kubelet[1356]: E0811 23:25:08.631602    1356 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 11 23:25:08 multinode-618164 kubelet[1356]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 11 23:25:08 multinode-618164 kubelet[1356]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 11 23:25:08 multinode-618164 kubelet[1356]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
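The kubelet section of the log above ends with a repeating "Could not set up iptables canary" error: the guest kernel exposes no ip6tables nat table. On an IPv4-only minikube cluster this particular message is generally benign noise rather than the cause of the exit status 90 from the restart. A quick check from outside the VM (a diagnostic sketch, assuming `minikube ssh` accepts a trailing command as in current releases; the profile name is the one from this run):

  $ out/minikube-linux-amd64 ssh -p multinode-618164 -- 'lsmod | grep ip6table_nat'    # no output: module not loaded
  $ out/minikube-linux-amd64 ssh -p multinode-618164 -- 'sudo modprobe ip6table_nat'   # may fail if the Buildroot ISO kernel omits the module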
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-618164 -n multinode-618164
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-618164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-67b7f59bb-hx8zk
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-618164 describe pod busybox-67b7f59bb-hx8zk
helpers_test.go:282: (dbg) kubectl --context multinode-618164 describe pod busybox-67b7f59bb-hx8zk:

-- stdout --
	Name:             busybox-67b7f59bb-hx8zk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-618164-m03/
	Labels:           app=busybox
	                  pod-template-hash=67b7f59bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-67b7f59bb
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tnxsb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-tnxsb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  42s   default-scheduler  Successfully assigned default/busybox-67b7f59bb-hx8zk to multinode-618164-m03

-- /stdout --
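The describe output shows the pod was scheduled to multinode-618164-m03 but sits Pending with no IP and only the Scheduled event, consistent with the m03 VM never finishing its restart (the start above exited 90 while bringing up m03). Follow-up checks against the same context, plain kubectl only (a sketch; the context and pod names are the ones from this run):

  $ kubectl --context multinode-618164 get nodes -o wide    # is multinode-618164-m03 Ready, with an InternalIP?
  $ kubectl --context multinode-618164 get events -n default --field-selector involvedObject.name=busybox-67b7f59bb-hx8zk    # anything after Scheduled, e.g. a sandbox failure?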
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (159.80s)
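To rerun only this subtest locally, standard go test filtering works against the integration package (a sketch, assuming a built out/minikube-linux-amd64 and the kvm2 driver used by this job; the suite's own minikube-specific flags, such as driver selection, are defined under test/integration and would be supplied separately):

  $ go test -v -timeout 60m ./test/integration -run 'TestMultiNode/serial/RestartKeepsNodes'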

TestNoKubernetes/serial/ProfileList (139.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2m10.811958081s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 profile list --output=json: signal: killed (7.968019734s)
no_kubernetes_test.go:181: Profile list --output=json failed : "out/minikube-linux-amd64 profile list --output=json" : signal: killed
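Both halves of this failure point at `profile list` being slow rather than wrong: the plain listing took over two minutes, and the JSON variant was killed (signal: killed) after about 8s. The same calls can be timed outside the harness with coreutils (same binary and flags as above):

  $ time out/minikube-linux-amd64 profile list
  $ timeout 30s out/minikube-linux-amd64 profile list --output=json; echo "exit=$?"    # exit 124 would mean timeout did the killing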
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-838892 -n NoKubernetes-838892
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-838892 -n NoKubernetes-838892: exit status 6 (250.909965ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0811 23:44:38.251612   44460 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-838892" does not appear in /home/jenkins/minikube-integration/17044-9593/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-838892" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
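The status probe fails only because the kubeconfig no longer carries an endpoint for this profile; the host itself reports Running. Applying the warning's own suggestion, with the profile made explicit (same CLI used throughout this report; note this may be a no-op for a profile started without Kubernetes):

  $ out/minikube-linux-amd64 update-context -p NoKubernetes-838892
  $ kubectl config current-context    # inspect which context is active afterwards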
--- FAIL: TestNoKubernetes/serial/ProfileList (139.03s)


Test pass (284/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.42
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.4/json-events 5.91
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.06
17 TestDownloadOnly/v1.28.0-rc.0/json-events 12.53
18 TestDownloadOnly/v1.28.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.12
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
26 TestBinaryMirror 0.54
27 TestOffline 81.99
29 TestAddons/Setup 153.7
31 TestAddons/parallel/Registry 39.55
32 TestAddons/parallel/Ingress 22.69
33 TestAddons/parallel/InspektorGadget 48.66
34 TestAddons/parallel/MetricsServer 6.14
35 TestAddons/parallel/HelmTiller 42.48
37 TestAddons/parallel/CSI 56.93
38 TestAddons/parallel/Headlamp 37.29
39 TestAddons/parallel/CloudSpanner 5.58
42 TestAddons/serial/GCPAuth/Namespaces 0.13
43 TestAddons/StoppedEnableDisable 13.37
44 TestCertOptions 61.03
45 TestCertExpiration 345.51
46 TestDockerFlags 103.79
47 TestForceSystemdFlag 87.77
48 TestForceSystemdEnv 62.7
50 TestKVMDriverInstallOrUpdate 3.24
54 TestErrorSpam/setup 50.6
55 TestErrorSpam/start 0.33
56 TestErrorSpam/status 0.74
57 TestErrorSpam/pause 1.19
58 TestErrorSpam/unpause 1.3
59 TestErrorSpam/stop 13.22
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 69.37
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 34.59
66 TestFunctional/serial/KubeContext 0.04
67 TestFunctional/serial/KubectlGetPods 0.09
70 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
71 TestFunctional/serial/CacheCmd/cache/add_local 1.41
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
73 TestFunctional/serial/CacheCmd/cache/list 0.04
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
75 TestFunctional/serial/CacheCmd/cache/cache_reload 1.17
76 TestFunctional/serial/CacheCmd/cache/delete 0.08
77 TestFunctional/serial/MinikubeKubectlCmd 0.1
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
79 TestFunctional/serial/ExtraConfig 41.46
80 TestFunctional/serial/ComponentHealth 0.06
81 TestFunctional/serial/LogsCmd 1.07
82 TestFunctional/serial/LogsFileCmd 1.12
83 TestFunctional/serial/InvalidService 5.23
85 TestFunctional/parallel/ConfigCmd 0.3
86 TestFunctional/parallel/DashboardCmd 44.54
87 TestFunctional/parallel/DryRun 0.27
88 TestFunctional/parallel/InternationalLanguage 0.15
89 TestFunctional/parallel/StatusCmd 1.03
93 TestFunctional/parallel/ServiceCmdConnect 13.65
94 TestFunctional/parallel/AddonsCmd 0.12
95 TestFunctional/parallel/PersistentVolumeClaim 59.82
97 TestFunctional/parallel/SSHCmd 0.5
98 TestFunctional/parallel/CpCmd 0.93
99 TestFunctional/parallel/MySQL 40.07
100 TestFunctional/parallel/FileSync 0.22
101 TestFunctional/parallel/CertSync 1.44
105 TestFunctional/parallel/NodeLabels 0.1
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
109 TestFunctional/parallel/License 0.2
110 TestFunctional/parallel/Version/short 0.04
111 TestFunctional/parallel/Version/components 0.97
112 TestFunctional/parallel/DockerEnv/bash 1.14
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
121 TestFunctional/parallel/ImageCommands/Setup 1.36
122 TestFunctional/parallel/ServiceCmd/DeployApp 13.36
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.23
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.43
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.1
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.99
136 TestFunctional/parallel/ServiceCmd/List 0.33
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
140 TestFunctional/parallel/ServiceCmd/Format 0.44
141 TestFunctional/parallel/ServiceCmd/URL 0.36
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.6
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
144 TestFunctional/parallel/ProfileCmd/profile_list 0.26
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
146 TestFunctional/parallel/MountCmd/any-port 28.92
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.96
148 TestFunctional/parallel/MountCmd/specific-port 2.14
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.33
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.01
153 TestGvisorAddon 272.63
156 TestImageBuild/serial/Setup 50.13
157 TestImageBuild/serial/NormalBuild 1.55
158 TestImageBuild/serial/BuildWithBuildArg 1.24
159 TestImageBuild/serial/BuildWithDockerIgnore 0.36
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.27
163 TestIngressAddonLegacy/StartLegacyK8sCluster 81.33
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.96
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
167 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.6
170 TestJSONOutput/start/Command 104.17
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.57
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.52
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 13.09
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.18
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 111.24
202 TestMountStart/serial/StartWithMountFirst 30.54
203 TestMountStart/serial/VerifyMountFirst 0.37
204 TestMountStart/serial/StartWithMountSecond 31.44
205 TestMountStart/serial/VerifyMountSecond 0.37
206 TestMountStart/serial/DeleteFirst 0.86
207 TestMountStart/serial/VerifyMountPostDelete 0.37
208 TestMountStart/serial/Stop 2.08
209 TestMountStart/serial/RestartStopped 23.75
210 TestMountStart/serial/VerifyMountPostStop 0.36
213 TestMultiNode/serial/FreshStart2Nodes 121.9
214 TestMultiNode/serial/DeployApp2Nodes 4.89
215 TestMultiNode/serial/PingHostFrom2Pods 0.92
216 TestMultiNode/serial/AddNode 46.36
217 TestMultiNode/serial/ProfileList 0.19
218 TestMultiNode/serial/CopyFile 7.11
219 TestMultiNode/serial/StopNode 3.91
220 TestMultiNode/serial/StartAfterStop 32.13
222 TestMultiNode/serial/DeleteNode 4.75
223 TestMultiNode/serial/StopMultiNode 26.29
224 TestMultiNode/serial/RestartMultiNode 106.15
225 TestMultiNode/serial/ValidateNameConflict 52.53
230 TestPreload 185.25
232 TestScheduledStopUnix 122
233 TestSkaffold 139.84
236 TestRunningBinaryUpgrade 170.43
238 TestKubernetesUpgrade 253.51
251 TestStoppedBinaryUpgrade/Setup 0.75
252 TestStoppedBinaryUpgrade/Upgrade 186.77
261 TestPause/serial/Start 75.34
262 TestPause/serial/SecondStartNoReconfiguration 50.74
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
266 TestNoKubernetes/serial/StartWithK8s 61.16
267 TestPause/serial/Pause 0.76
268 TestPause/serial/VerifyStatus 0.31
269 TestPause/serial/Unpause 0.67
270 TestPause/serial/PauseAgain 0.78
271 TestPause/serial/DeletePaused 1.28
272 TestPause/serial/VerifyDeletedResources 13.92
273 TestNoKubernetes/serial/StartWithStopK8s 53.74
274 TestNoKubernetes/serial/Start 46.04
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
277 TestNetworkPlugins/group/auto/Start 110.11
278 TestNetworkPlugins/group/kindnet/Start 99.06
279 TestNetworkPlugins/group/calico/Start 140.39
280 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
281 TestNetworkPlugins/group/auto/KubeletFlags 0.25
282 TestNetworkPlugins/group/auto/NetCatPod 12.53
283 TestNetworkPlugins/group/custom-flannel/Start 88.7
284 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
285 TestNetworkPlugins/group/kindnet/NetCatPod 12.51
286 TestNetworkPlugins/group/auto/DNS 0.23
287 TestNetworkPlugins/group/auto/Localhost 0.18
288 TestNetworkPlugins/group/auto/HairPin 0.2
289 TestNetworkPlugins/group/kindnet/DNS 0.32
290 TestNetworkPlugins/group/kindnet/Localhost 0.19
291 TestNetworkPlugins/group/kindnet/HairPin 0.21
292 TestNetworkPlugins/group/false/Start 83
293 TestNetworkPlugins/group/enable-default-cni/Start 108.74
294 TestNetworkPlugins/group/calico/ControllerPod 5.03
295 TestNetworkPlugins/group/calico/KubeletFlags 0.22
296 TestNetworkPlugins/group/calico/NetCatPod 12.51
297 TestNetworkPlugins/group/calico/DNS 0.24
298 TestNetworkPlugins/group/calico/Localhost 0.21
299 TestNetworkPlugins/group/calico/HairPin 0.21
300 TestNetworkPlugins/group/flannel/Start 96.44
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.46
303 TestNetworkPlugins/group/custom-flannel/DNS 0.24
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
306 TestNetworkPlugins/group/false/KubeletFlags 0.23
307 TestNetworkPlugins/group/false/NetCatPod 13.47
308 TestNetworkPlugins/group/bridge/Start 79.24
309 TestNetworkPlugins/group/false/DNS 16.81
310 TestNetworkPlugins/group/false/Localhost 0.19
311 TestNetworkPlugins/group/false/HairPin 0.22
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.5
314 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
315 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
316 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
317 TestNetworkPlugins/group/kubenet/Start 87.77
319 TestStartStop/group/old-k8s-version/serial/FirstStart 169.53
320 TestNetworkPlugins/group/flannel/ControllerPod 5.02
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
322 TestNetworkPlugins/group/flannel/NetCatPod 12.44
323 TestNetworkPlugins/group/flannel/DNS 0.27
324 TestNetworkPlugins/group/flannel/Localhost 0.22
325 TestNetworkPlugins/group/flannel/HairPin 0.2
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
327 TestNetworkPlugins/group/bridge/NetCatPod 13.49
328 TestNetworkPlugins/group/bridge/DNS 0.18
329 TestNetworkPlugins/group/bridge/Localhost 0.18
330 TestNetworkPlugins/group/bridge/HairPin 0.21
332 TestStartStop/group/no-preload/serial/FirstStart 100.47
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.94
335 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
336 TestNetworkPlugins/group/kubenet/NetCatPod 11.44
337 TestNetworkPlugins/group/kubenet/DNS 0.22
338 TestNetworkPlugins/group/kubenet/Localhost 0.15
339 TestNetworkPlugins/group/kubenet/HairPin 0.18
341 TestStartStop/group/newest-cni/serial/FirstStart 85.86
342 TestStartStop/group/no-preload/serial/DeployApp 10.57
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.43
344 TestStartStop/group/no-preload/serial/Stop 13.12
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.63
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
347 TestStartStop/group/no-preload/serial/SecondStart 330.99
348 TestStartStop/group/old-k8s-version/serial/DeployApp 9.53
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.31
350 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.12
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
352 TestStartStop/group/old-k8s-version/serial/Stop 13.15
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 314.42
355 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
357 TestStartStop/group/newest-cni/serial/Stop 12.13
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
359 TestStartStop/group/old-k8s-version/serial/SecondStart 479.24
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
361 TestStartStop/group/newest-cni/serial/SecondStart 83.74
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
365 TestStartStop/group/newest-cni/serial/Pause 2.33
367 TestStartStop/group/embed-certs/serial/FirstStart 76.98
368 TestStartStop/group/embed-certs/serial/DeployApp 8.52
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
370 TestStartStop/group/embed-certs/serial/Stop 13.12
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/embed-certs/serial/SecondStart 332.72
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/no-preload/serial/Pause 2.74
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
381 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/old-k8s-version/serial/Pause 2.52
385 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/embed-certs/serial/Pause 2.49
TestDownloadOnly/v1.16.0/json-events (15.42s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498302 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498302 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (15.417780172s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.42s)
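Note: with --download-only, minikube exits after caching the VM boot ISO, the version-matched preload tarball, and kubectl under MINIKUBE_HOME; no VM is created (see the download.go lines in the log below). A minimal sketch of confirming the cache by hand, where download-only-demo is a placeholder profile name:

    out/minikube-linux-amd64 start --download-only -p download-only-demo \
        --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
    ls "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/"

This is also why the preload-exists check below completes in 0.00s: it only has to stat the tarball cached here.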

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498302
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498302: exit status 85 (55.73043ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-498302 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-498302        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:00:42
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:00:42.736779   16848 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:00:42.736917   16848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:00:42.736927   16848 out.go:309] Setting ErrFile to fd 2...
	I0811 23:00:42.736931   16848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:00:42.737146   16848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	W0811 23:00:42.737304   16848 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17044-9593/.minikube/config/config.json: open /home/jenkins/minikube-integration/17044-9593/.minikube/config/config.json: no such file or directory
	I0811 23:00:42.737908   16848 out.go:303] Setting JSON to true
	I0811 23:00:42.738757   16848 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2597,"bootTime":1691792246,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:00:42.738815   16848 start.go:138] virtualization: kvm guest
	I0811 23:00:42.741490   16848 out.go:97] [download-only-498302] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	W0811 23:00:42.741593   16848 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball: no such file or directory
	I0811 23:00:42.743262   16848 out.go:169] MINIKUBE_LOCATION=17044
	I0811 23:00:42.741684   16848 notify.go:220] Checking for updates...
	I0811 23:00:42.746225   16848 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:00:42.747745   16848 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:00:42.749191   16848 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:00:42.750581   16848 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0811 23:00:42.753285   16848 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0811 23:00:42.753555   16848 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:00:42.868395   16848 out.go:97] Using the kvm2 driver based on user configuration
	I0811 23:00:42.868447   16848 start.go:298] selected driver: kvm2
	I0811 23:00:42.868455   16848 start.go:901] validating driver "kvm2" against <nil>
	I0811 23:00:42.868750   16848 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:00:42.868863   16848 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17044-9593/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0811 23:00:42.882657   16848 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.1
	I0811 23:00:42.882702   16848 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:00:42.883196   16848 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0811 23:00:42.883344   16848 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0811 23:00:42.883375   16848 cni.go:84] Creating CNI manager for ""
	I0811 23:00:42.883396   16848 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0811 23:00:42.883405   16848 start_flags.go:319] config:
	{Name:download-only-498302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-498302 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:00:42.883594   16848 iso.go:125] acquiring lock: {Name:mkbb435ea885d9d203ce0113f8005e4b53bc59ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:00:42.885510   16848 out.go:97] Downloading VM boot image ...
	I0811 23:00:42.885530   16848 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17044-9593/.minikube/cache/iso/amd64/minikube-v1.31.0-1690838458-16971-amd64.iso
	I0811 23:00:45.526311   16848 out.go:97] Starting control plane node download-only-498302 in cluster download-only-498302
	I0811 23:00:45.526331   16848 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0811 23:00:45.552424   16848 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0811 23:00:45.552462   16848 cache.go:57] Caching tarball of preloaded images
	I0811 23:00:45.552623   16848 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0811 23:00:45.554989   16848 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0811 23:00:45.555013   16848 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:00:45.586688   16848 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0811 23:00:50.660864   16848 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:00:50.660952   16848 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:00:51.382000   16848 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0811 23:00:51.382316   16848 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/download-only-498302/config.json ...
	I0811 23:00:51.382343   16848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/download-only-498302/config.json: {Name:mk3a3eda183895fbec92fe5096dead68903d7b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:00:51.382497   16848 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0811 23:00:51.382686   16848 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17044-9593/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-498302"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
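Note: exit status 85 is what this assertion expects, not a regression. The profile was created with --download-only, so no control plane node was ever started and `minikube logs` has nothing to collect (hence the `The control plane node "" does not exist` hint in the output above). Reproducing the check by hand against this run's profile:

    out/minikube-linux-amd64 logs -p download-only-498302
    echo $?    # 85 while the profile has no started node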

TestDownloadOnly/v1.27.4/json-events (5.91s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498302 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498302 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=docker --driver=kvm2 : (5.904956084s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (5.91s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498302
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498302: exit status 85 (55.447517ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-498302 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-498302        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-498302 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-498302        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:00:58
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:00:58.212588   16917 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:00:58.212711   16917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:00:58.212719   16917 out.go:309] Setting ErrFile to fd 2...
	I0811 23:00:58.212723   16917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:00:58.212906   16917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	W0811 23:00:58.213015   16917 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17044-9593/.minikube/config/config.json: open /home/jenkins/minikube-integration/17044-9593/.minikube/config/config.json: no such file or directory
	I0811 23:00:58.213404   16917 out.go:303] Setting JSON to true
	I0811 23:00:58.214166   16917 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2612,"bootTime":1691792246,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:00:58.214219   16917 start.go:138] virtualization: kvm guest
	I0811 23:00:58.216263   16917 out.go:97] [download-only-498302] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0811 23:00:58.218026   16917 out.go:169] MINIKUBE_LOCATION=17044
	I0811 23:00:58.216451   16917 notify.go:220] Checking for updates...
	I0811 23:00:58.221414   16917 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:00:58.222946   16917 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:00:58.224638   16917 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:00:58.226028   16917 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0811 23:00:58.228904   16917 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0811 23:00:58.229506   16917 config.go:182] Loaded profile config "download-only-498302": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0811 23:00:58.229557   16917 start.go:809] api.Load failed for download-only-498302: filestore "download-only-498302": Docker machine "download-only-498302" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:00:58.229682   16917 driver.go:373] Setting default libvirt URI to qemu:///system
	W0811 23:00:58.229732   16917 start.go:809] api.Load failed for download-only-498302: filestore "download-only-498302": Docker machine "download-only-498302" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:00:58.261917   16917 out.go:97] Using the kvm2 driver based on existing profile
	I0811 23:00:58.261942   16917 start.go:298] selected driver: kvm2
	I0811 23:00:58.261948   16917 start.go:901] validating driver "kvm2" against &{Name:download-only-498302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-498302 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:00:58.262341   16917 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:00:58.262407   16917 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17044-9593/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0811 23:00:58.276508   16917 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.1
	I0811 23:00:58.277157   16917 cni.go:84] Creating CNI manager for ""
	I0811 23:00:58.277179   16917 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0811 23:00:58.277194   16917 start_flags.go:319] config:
	{Name:download-only-498302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-498302 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:00:58.277346   16917 iso.go:125] acquiring lock: {Name:mkbb435ea885d9d203ce0113f8005e4b53bc59ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:00:58.279183   16917 out.go:97] Starting control plane node download-only-498302 in cluster download-only-498302
	I0811 23:00:58.279201   16917 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:00:58.301757   16917 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4
	I0811 23:00:58.301787   16917 cache.go:57] Caching tarball of preloaded images
	I0811 23:00:58.301918   16917 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0811 23:00:58.303752   16917 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0811 23:00:58.303767   16917 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:00:58.333031   16917 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4?checksum=md5:57da30b73c6409bef80873fc9e1b0d5b -> /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4
	I0811 23:01:02.494857   16917 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:01:02.494941   16917 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-498302"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0-rc.0/json-events (12.53s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498302 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498302 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=docker --driver=kvm2 : (12.533022948s)
--- PASS: TestDownloadOnly/v1.28.0-rc.0/json-events (12.53s)

TestDownloadOnly/v1.28.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498302
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498302: exit status 85 (57.523859ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-498302 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-498302           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-498302 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-498302           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-498302 | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |          |
	|         | -p download-only-498302           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:01:04
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:01:04.175337   16961 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:01:04.175443   16961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:04.175452   16961 out.go:309] Setting ErrFile to fd 2...
	I0811 23:01:04.175456   16961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:04.175663   16961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	W0811 23:01:04.175775   16961 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17044-9593/.minikube/config/config.json: open /home/jenkins/minikube-integration/17044-9593/.minikube/config/config.json: no such file or directory
	I0811 23:01:04.176158   16961 out.go:303] Setting JSON to true
	I0811 23:01:04.177190   16961 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2618,"bootTime":1691792246,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:01:04.177313   16961 start.go:138] virtualization: kvm guest
	I0811 23:01:04.179690   16961 out.go:97] [download-only-498302] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0811 23:01:04.181226   16961 out.go:169] MINIKUBE_LOCATION=17044
	I0811 23:01:04.179835   16961 notify.go:220] Checking for updates...
	I0811 23:01:04.184334   16961 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:01:04.185836   16961 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:01:04.187270   16961 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:01:04.188598   16961 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0811 23:01:04.191253   16961 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0811 23:01:04.191610   16961 config.go:182] Loaded profile config "download-only-498302": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	W0811 23:01:04.191644   16961 start.go:809] api.Load failed for download-only-498302: filestore "download-only-498302": Docker machine "download-only-498302" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:01:04.191728   16961 driver.go:373] Setting default libvirt URI to qemu:///system
	W0811 23:01:04.191755   16961 start.go:809] api.Load failed for download-only-498302: filestore "download-only-498302": Docker machine "download-only-498302" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:01:04.222344   16961 out.go:97] Using the kvm2 driver based on existing profile
	I0811 23:01:04.222362   16961 start.go:298] selected driver: kvm2
	I0811 23:01:04.222367   16961 start.go:901] validating driver "kvm2" against &{Name:download-only-498302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.27.4 ClusterName:download-only-498302 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:04.222706   16961 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:01:04.222762   16961 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17044-9593/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0811 23:01:04.236343   16961 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.1
	I0811 23:01:04.236958   16961 cni.go:84] Creating CNI manager for ""
	I0811 23:01:04.236972   16961 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0811 23:01:04.236980   16961 start_flags.go:319] config:
	{Name:download-only-498302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:download-only-498302 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:04.237106   16961 iso.go:125] acquiring lock: {Name:mkbb435ea885d9d203ce0113f8005e4b53bc59ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:01:04.238789   16961 out.go:97] Starting control plane node download-only-498302 in cluster download-only-498302
	I0811 23:01:04.238810   16961 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0811 23:01:04.269328   16961 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0811 23:01:04.269358   16961 cache.go:57] Caching tarball of preloaded images
	I0811 23:01:04.269491   16961 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0811 23:01:04.271411   16961 out.go:97] Downloading Kubernetes v1.28.0-rc.0 preload ...
	I0811 23:01:04.271428   16961 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:01:04.302407   16961 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:c58517d7db3f18e24d5fe8f6ec89509d -> /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0811 23:01:09.489507   16961 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:01:09.489617   16961 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17044-9593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0811 23:01:10.310863   16961 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.0 on docker
	I0811 23:01:10.311024   16961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/download-only-498302/config.json ...
	I0811 23:01:10.311276   16961 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0811 23:01:10.311491   16961 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17044-9593/.minikube/cache/linux/amd64/v1.28.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-498302"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-498302
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-484210 --alsologtostderr --binary-mirror http://127.0.0.1:38281 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-484210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-484210
--- PASS: TestBinaryMirror (0.54s)
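Note: TestBinaryMirror verifies that the kubectl, kubelet, and kubeadm binaries are fetched from the URL given to --binary-mirror instead of the default dl.k8s.io; the 127.0.0.1:38281 endpoint above appears to be a test-local HTTP server (note the loopback address and ephemeral port). A rough hand-run equivalent, where MIRROR_URL is a placeholder for a server exposing the same release layout:

    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
        --binary-mirror "$MIRROR_URL" --driver=kvm2
    out/minikube-linux-amd64 delete -p binary-mirror-demo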

TestOffline (81.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-165129 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-165129 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m20.940409191s)
helpers_test.go:175: Cleaning up "offline-docker-165129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-165129
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-165129: (1.051533791s)
--- PASS: TestOffline (81.99s)

TestAddons/Setup (153.7s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-894170 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-894170 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.703278987s)
--- PASS: TestAddons/Setup (153.70s)

TestAddons/parallel/Registry (39.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 23.959165ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hgfs5" [080441f5-329a-4e80-80b6-a81de36e2b00] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.026486125s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bq8ns" [9e49d948-7871-4201-a8e6-c99e9995ff0c] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017363202s
addons_test.go:316: (dbg) Run:  kubectl --context addons-894170 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-894170 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-894170 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (28.684122557s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 ip
2023/08/11 23:04:30 [DEBUG] GET http://192.168.39.162:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (39.55s)
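Note: the registry check above has two halves: an in-cluster probe (a busybox pod running wget --spider against the registry Service's cluster DNS name) and a host-side probe (an HTTP GET against the node IP on port 5000, reached via registry-proxy). Approximately, by hand (the pod name registry-check is a placeholder):

    kubectl --context addons-894170 run --rm registry-check --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-894170 ip):5000"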

TestAddons/parallel/Ingress (22.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-894170 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-894170 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-894170 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8414e9f7-e69e-4ee8-9799-2c76acfe6058] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8414e9f7-e69e-4ee8-9799-2c76acfe6058] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.168837397s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-894170 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.162
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-894170 addons disable ingress-dns --alsologtostderr -v=1: (1.257377034s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-894170 addons disable ingress --alsologtostderr -v=1: (7.696081483s)
--- PASS: TestAddons/parallel/Ingress (22.69s)
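Note: the ingress verification drives traffic from inside the VM (curl targets 127.0.0.1 with a Host header matching the Ingress rule), while the ingress-dns verification resolves a test hostname using the node IP as the DNS server. Re-running the two probes by hand:

    out/minikube-linux-amd64 -p addons-894170 ssh \
        "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-894170 ip)"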

TestAddons/parallel/InspektorGadget (48.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gr5tx" [966c02e9-614a-4f37-846b-d3d19abda48d] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.036890799s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-894170
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-894170: (43.61754778s)
--- PASS: TestAddons/parallel/InspektorGadget (48.66s)

TestAddons/parallel/MetricsServer (6.14s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 24.174277ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-shtrj" [6dc8fc22-1613-467d-91fe-4211d17550bc] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.025497855s
addons_test.go:391: (dbg) Run:  kubectl --context addons-894170 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-894170 addons disable metrics-server --alsologtostderr -v=1: (1.013253693s)
--- PASS: TestAddons/parallel/MetricsServer (6.14s)

TestAddons/parallel/HelmTiller (42.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 23.894375ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-wtddt" [4167a52f-6f8b-4dca-a839-512ce65e9d71] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.024739314s
addons_test.go:449: (dbg) Run:  kubectl --context addons-894170 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-894170 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (36.875955067s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (42.48s)

TestAddons/parallel/CSI (56.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.842982ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-894170 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-894170 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [970e3c78-4021-4851-809b-7fc9ef207fb4] Pending
helpers_test.go:344: "task-pv-pod" [970e3c78-4021-4851-809b-7fc9ef207fb4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [970e3c78-4021-4851-809b-7fc9ef207fb4] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.020137711s
addons_test.go:560: (dbg) Run:  kubectl --context addons-894170 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-894170 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-894170 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-894170 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-894170 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-894170 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-894170 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894170 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-894170 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [009282b1-b95c-4d50-af65-a8ee7e963fc3] Pending
helpers_test.go:344: "task-pv-pod-restore" [009282b1-b95c-4d50-af65-a8ee7e963fc3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [009282b1-b95c-4d50-af65-a8ee7e963fc3] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.024898191s
addons_test.go:602: (dbg) Run:  kubectl --context addons-894170 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-894170 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-894170 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-894170 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.830145299s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-894170 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.93s)

TestAddons/parallel/Headlamp (37.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-894170 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-894170 --alsologtostderr -v=1: (1.226997408s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-nqq9q" [7a946923-c632-4eb3-baa7-609193a8ad4f] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-nqq9q" [7a946923-c632-4eb3-baa7-609193a8ad4f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-nqq9q" [7a946923-c632-4eb3-baa7-609193a8ad4f] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 36.058334097s
--- PASS: TestAddons/parallel/Headlamp (37.29s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-7dh2v" [525dfd58-8287-4704-8a73-66728e04597c] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012943808s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-894170
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-894170 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-894170 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-894170
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-894170: (13.1197312s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-894170
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-894170
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-894170
--- PASS: TestAddons/StoppedEnableDisable (13.37s)

TestCertOptions (61.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-536015 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0811 23:42:35.144398   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-536015 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (59.570449021s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-536015 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-536015 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-536015 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-536015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-536015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-536015: (1.00706716s)
--- PASS: TestCertOptions (61.03s)

TestCertExpiration (345.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-690855 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-690855 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m15.74390325s)
E0811 23:41:54.183796   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:54.385500   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-690855 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0811 23:46:13.100770   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-690855 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m28.678440554s)
helpers_test.go:175: Cleaning up "cert-expiration-690855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-690855
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-690855: (1.088343346s)
--- PASS: TestCertExpiration (345.51s)

TestDockerFlags (103.79s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-918146 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-918146 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m42.325398728s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-918146 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-918146 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-918146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-918146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-918146: (1.034590948s)
--- PASS: TestDockerFlags (103.79s)

TestForceSystemdFlag (87.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-189649 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-189649 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m26.51804066s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-189649 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-189649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-189649
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-189649: (1.012060304s)
--- PASS: TestForceSystemdFlag (87.77s)

TestForceSystemdEnv (62.7s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-710034 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E0811 23:43:29.912077   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-710034 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m1.428205152s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-710034 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-710034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-710034
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-710034: (1.051547142s)
--- PASS: TestForceSystemdEnv (62.70s)

TestKVMDriverInstallOrUpdate (3.24s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.24s)

TestErrorSpam/setup (50.6s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-666970 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-666970 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-666970 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-666970 --driver=kvm2 : (50.600223858s)
--- PASS: TestErrorSpam/setup (50.60s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.19s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.3s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 unpause
--- PASS: TestErrorSpam/unpause (1.30s)

TestErrorSpam/stop (13.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 stop: (13.090537242s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-666970 --log_dir /tmp/nospam-666970 stop
--- PASS: TestErrorSpam/stop (13.22s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17044-9593/.minikube/files/etc/test/nested/copy/16836/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035969 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-035969 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m9.373723898s)
--- PASS: TestFunctional/serial/StartWithProxy (69.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035969 --alsologtostderr -v=8
E0811 23:08:51.339728   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.345634   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.355886   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.376195   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.416462   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.496802   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.657291   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:51.977392   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:52.618457   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:08:53.898995   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-035969 --alsologtostderr -v=8: (34.590753933s)
functional_test.go:659: soft start took 34.591335718s for "functional-035969" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.59s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-035969 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cache add registry.k8s.io/pause:3.3
E0811 23:08:56.459912   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-035969 /tmp/TestFunctionalserialCacheCmdcacheadd_local299236254/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cache add minikube-local-cache-test:functional-035969
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 cache add minikube-local-cache-test:functional-035969: (1.093648839s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cache delete minikube-local-cache-test:functional-035969
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-035969
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.044508ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 kubectl -- --context functional-035969 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-035969 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035969 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0811 23:09:01.580118   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:09:11.821057   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:09:32.301280   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-035969 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.460465601s)
functional_test.go:757: restart took 41.460578854s for "functional-035969" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.46s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-035969 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 logs: (1.066047933s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 logs --file /tmp/TestFunctionalserialLogsFileCmd4017523845/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 logs --file /tmp/TestFunctionalserialLogsFileCmd4017523845/001/logs.txt: (1.118139418s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

TestFunctional/serial/InvalidService (5.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-035969 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-035969
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-035969: exit status 115 (289.686129ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.75:30390 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-035969 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-035969 delete -f testdata/invalidsvc.yaml: (1.562416619s)
--- PASS: TestFunctional/serial/InvalidService (5.23s)

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 config get cpus: exit status 14 (48.241368ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 config get cpus: exit status 14 (51.01031ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

TestFunctional/parallel/DashboardCmd (44.54s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-035969 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-035969 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23467: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.54s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035969 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-035969 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (131.150554ms)

-- stdout --
	* [functional-035969] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0811 23:10:10.382273   23340 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:10:10.382629   23340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:10:10.382673   23340 out.go:309] Setting ErrFile to fd 2...
	I0811 23:10:10.382691   23340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:10:10.383230   23340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	I0811 23:10:10.384654   23340 out.go:303] Setting JSON to false
	I0811 23:10:10.385580   23340 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3165,"bootTime":1691792246,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:10:10.385637   23340 start.go:138] virtualization: kvm guest
	I0811 23:10:10.387320   23340 out.go:177] * [functional-035969] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0811 23:10:10.389475   23340 notify.go:220] Checking for updates...
	I0811 23:10:10.389497   23340 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:10:10.391049   23340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:10:10.392527   23340 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:10:10.394037   23340 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:10:10.395561   23340 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0811 23:10:10.397016   23340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:10:10.399041   23340 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:10:10.399597   23340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:10:10.399665   23340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:10:10.414132   23340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0811 23:10:10.414567   23340 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:10:10.415058   23340 main.go:141] libmachine: Using API Version  1
	I0811 23:10:10.415083   23340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:10:10.415474   23340 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:10:10.415652   23340 main.go:141] libmachine: (functional-035969) Calling .DriverName
	I0811 23:10:10.415925   23340 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:10:10.416336   23340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:10:10.416391   23340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:10:10.430307   23340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0811 23:10:10.430703   23340 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:10:10.431308   23340 main.go:141] libmachine: Using API Version  1
	I0811 23:10:10.431338   23340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:10:10.431779   23340 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:10:10.431965   23340 main.go:141] libmachine: (functional-035969) Calling .DriverName
	I0811 23:10:10.465438   23340 out.go:177] * Using the kvm2 driver based on existing profile
	I0811 23:10:10.467037   23340 start.go:298] selected driver: kvm2
	I0811 23:10:10.467050   23340 start.go:901] validating driver "kvm2" against &{Name:functional-035969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-035969 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:10:10.467246   23340 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:10:10.469594   23340 out.go:177] 
	W0811 23:10:10.471229   23340 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0811 23:10:10.472687   23340 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035969 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
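
The dry-run exercised above never touches the VM: minikube validates the requested resources first, and a 250MB request fails that validation (RSRC_INSUFFICIENT_REQ_MEMORY in the stderr above) before any driver work starts. A minimal Go sketch of the same check, assuming the out/minikube-linux-amd64 binary and the functional-035969 profile from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror the logged invocation: an undersized dry-run start must fail
	// resource validation instead of touching the kvm2 driver.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-035969", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("unexpected: a 250MB dry-run was accepted")
		return
	}
	// The log shows this surfacing as RSRC_INSUFFICIENT_REQ_MEMORY.
	fmt.Printf("rejected as expected: %v\n%s", err, out)
}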

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035969 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-035969 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (147.187276ms)

-- stdout --
	* [functional-035969] minikube v1.31.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0811 23:10:10.670964   23395 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:10:10.671091   23395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:10:10.671117   23395 out.go:309] Setting ErrFile to fd 2...
	I0811 23:10:10.671125   23395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:10:10.671540   23395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	I0811 23:10:10.672249   23395 out.go:303] Setting JSON to false
	I0811 23:10:10.673473   23395 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3165,"bootTime":1691792246,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0811 23:10:10.673559   23395 start.go:138] virtualization: kvm guest
	I0811 23:10:10.676269   23395 out.go:177] * [functional-035969] minikube v1.31.1 sur Ubuntu 20.04 (kvm/amd64)
	I0811 23:10:10.678523   23395 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:10:10.678557   23395 notify.go:220] Checking for updates...
	I0811 23:10:10.680525   23395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:10:10.682352   23395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	I0811 23:10:10.684148   23395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	I0811 23:10:10.685791   23395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0811 23:10:10.687650   23395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:10:10.689646   23395 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:10:10.690025   23395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:10:10.690070   23395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:10:10.705275   23395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0811 23:10:10.705696   23395 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:10:10.706234   23395 main.go:141] libmachine: Using API Version  1
	I0811 23:10:10.706258   23395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:10:10.706743   23395 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:10:10.706965   23395 main.go:141] libmachine: (functional-035969) Calling .DriverName
	I0811 23:10:10.707259   23395 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:10:10.707676   23395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:10:10.707719   23395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:10:10.722351   23395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0811 23:10:10.722790   23395 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:10:10.723297   23395 main.go:141] libmachine: Using API Version  1
	I0811 23:10:10.723326   23395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:10:10.723627   23395 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:10:10.723819   23395 main.go:141] libmachine: (functional-035969) Calling .DriverName
	I0811 23:10:10.755934   23395 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0811 23:10:10.757886   23395 start.go:298] selected driver: kvm2
	I0811 23:10:10.757896   23395 start.go:901] validating driver "kvm2" against &{Name:functional-035969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-035969 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:10:10.758034   23395 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:10:10.760105   23395 out.go:177] 
	W0811 23:10:10.761471   23395 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0811 23:10:10.762837   23395 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
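
The French output above is the point of the test: the same undersized dry-run is repeated with minikube switched to a French locale, and the RSRC_INSUFFICIENT_REQ_MEMORY message comes back translated. The env setup is not visible in this log, so the LC_ALL=fr below is an assumption about how the suite selects the locale:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-035969", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2")
	// Assumption: minikube selects its message catalog from the locale
	// environment, so a French locale should reproduce the
	// "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" line above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}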

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
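
The second invocation above shows that status -f renders the status struct through a Go template (the "kublet" key is spelled that way in the test's own format string). A sketch that drives the same flag and splits the one-line result, assuming the profile is up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same format string as the log: each status field is rendered through
	// a Go template into one comma-separated line.
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035969",
		"status", "-f", format).Output()
	if err != nil {
		// status also exits non-zero when a component is down.
		fmt.Println("status failed:", err)
		return
	}
	for _, field := range strings.Split(strings.TrimSpace(string(out)), ",") {
		fmt.Println(field) // e.g. "host:Running"
	}
}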

TestFunctional/parallel/ServiceCmdConnect (13.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-035969 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-035969 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-4wfhv" [32142042-b68b-4590-9fbf-e31da0a0f09c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-4wfhv" [32142042-b68b-4590-9fbf-e31da0a0f09c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.028856191s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.75:30933
functional_test.go:1674: http://192.168.39.75:30933: success! body:

Hostname: hello-node-connect-6fb669fc84-4wfhv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.75:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.75:30933
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.65s)
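
The test reduces to: expose the deployment as a NodePort service, ask minikube for its URL, and expect an HTTP response whose body echoes the request (the "Hostname:" line above). A sketch of the verification half, reusing the service name and binary from this log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL, as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035969",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.75:30933
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver reflects the request; a "Hostname:" line means the
	// service routed to a healthy pod.
	fmt.Println(resp.StatusCode, strings.Contains(string(body), "Hostname:"))
}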

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
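
addons list -o json is the machine-readable variant. The exact per-addon schema is not shown in this log, so the sketch below decodes into a generic map keyed by addon name, which is an assumption about the output shape:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035969",
		"addons", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("addons list failed:", err)
		return
	}
	// Assumed shape: a JSON object keyed by addon name; the per-addon
	// value is left opaque here because the log does not show it.
	var addons map[string]json.RawMessage
	if err := json.Unmarshal(out, &addons); err != nil {
		fmt.Println("unexpected format:", err)
		return
	}
	for name := range addons {
		fmt.Println(name)
	}
}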

TestFunctional/parallel/PersistentVolumeClaim (59.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [667d4f64-a144-44aa-8f31-2803acc764d7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015181779s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-035969 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-035969 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-035969 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-035969 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-035969 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [72100642-0db5-4b90-9f8e-6c5da543c321] Pending
helpers_test.go:344: "sp-pod" [72100642-0db5-4b90-9f8e-6c5da543c321] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [72100642-0db5-4b90-9f8e-6c5da543c321] Running
E0811 23:10:13.261721   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.020355078s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-035969 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-035969 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-035969 delete -f testdata/storage-provisioner/pod.yaml: (1.695736841s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-035969 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c274bb7a-2c38-4763-a946-6de7b95e2864] Pending
helpers_test.go:344: "sp-pod" [c274bb7a-2c38-4763-a946-6de7b95e2864] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c274bb7a-2c38-4763-a946-6de7b95e2864] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.023510036s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-035969 exec sp-pod -- ls /tmp/mount
2023/08/11 23:10:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (59.82s)
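
The sequence above is a persistence round-trip: bind the PVC, mount it in sp-pod, write /tmp/mount/foo, delete and recreate the pod against the same claim, then ls the mount to confirm the file survived. A sketch of the same steps with plain kubectl (the test itself polls pod labels rather than using kubectl wait), reusing the manifests named in the log:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-035969 context,
// mirroring the commands recorded in the log above.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-035969"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // expect "foo" to survive
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}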

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (0.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh -n functional-035969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 cp functional-035969:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3163586276/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh -n functional-035969 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.93s)
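
minikube cp is checked in both directions: push a local file into the VM, cat it back over ssh, then copy it out again. A round-trip sketch; the temp destination path is mine, everything else is taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// Local file into the VM, as in the log.
	if out, err := exec.Command(mk, "-p", "functional-035969", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Printf("cp into VM failed: %v\n%s", err, out)
		return
	}
	// Back out of the VM into a temp file, then compare with `ssh cat`.
	dst := filepath.Join(os.TempDir(), "cp-test.txt")
	if out, err := exec.Command(mk, "-p", "functional-035969", "cp",
		"functional-035969:/home/docker/cp-test.txt", dst).CombinedOutput(); err != nil {
		fmt.Printf("cp out of VM failed: %v\n%s", err, out)
		return
	}
	remote, _ := exec.Command(mk, "-p", "functional-035969", "ssh", "-n",
		"functional-035969", "sudo cat /home/docker/cp-test.txt").Output()
	local, _ := os.ReadFile(dst)
	fmt.Println("round-trip intact:", string(remote) == string(local))
}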

TestFunctional/parallel/MySQL (40.07s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-035969 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-vz2kt" [c028e2bf-e159-42e8-8976-07e15afadc27] Pending
helpers_test.go:344: "mysql-7db894d786-vz2kt" [c028e2bf-e159-42e8-8976-07e15afadc27] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-vz2kt" [c028e2bf-e159-42e8-8976-07e15afadc27] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.026047392s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;": exit status 1 (206.415243ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;": exit status 1 (267.745626ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;": exit status 1 (174.120832ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-035969 exec mysql-7db894d786-vz2kt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.07s)
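
The three failed "show databases;" attempts above are expected noise: the pod reports Running as soon as mysqld starts, but the server spends a while initializing, first rejecting the root password (ERROR 1045) and then briefly not listening on its socket (ERROR 2002), so the test retries until the query succeeds. A sketch of that retry loop, using the pod name from this particular run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the exact query from the log until mysqld finishes initializing.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-035969",
			"exec", "mysql-7db894d786-vz2kt", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		// During startup this yields ERROR 1045, then ERROR 2002, as above.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysql never became ready")
}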

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16836/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /etc/test/nested/copy/16836/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16836.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /etc/ssl/certs/16836.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16836.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /usr/share/ca-certificates/16836.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/168362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /etc/ssl/certs/168362.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/168362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /usr/share/ca-certificates/168362.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-035969 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
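
The label check leans on kubectl's go-template output: iterate the first node's .metadata.labels map and print only the keys. The same template works standalone (the single quotes in the logged command are shell quoting, dropped here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same template as the log: print every label key on the first node
	// returned by the API.
	tmpl := "{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
	out, err := exec.Command("kubectl", "--context", "functional-035969",
		"get", "nodes", "-o", "go-template", "--template", tmpl).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("%s\n", out)
}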

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh "sudo systemctl is-active crio": exit status 1 (235.962309ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
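
systemctl is-active exits non-zero for any state other than active (here status 3 with "inactive" on stdout), so for a Docker-runtime cluster the non-zero exit is the passing outcome: cri-o must not be running alongside Docker. A sketch that encodes that expectation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With the docker runtime active, cri-o should be inactive: a zero
	// exit here would mean two container runtimes are enabled at once.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035969",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err == nil {
		fmt.Println("unexpected: crio is", state)
		return
	}
	fmt.Printf("ok: crio is %q (ssh exited non-zero: %v)\n", state, err)
}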

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.97s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

TestFunctional/parallel/DockerEnv/bash (1.14s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-035969 docker-env) && out/minikube-linux-amd64 status -p functional-035969"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-035969 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.14s)
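
docker-env prints shell exports that point a local docker client at the daemon inside the VM; the test evaluates them in a bash subshell and runs docker images there, so the listing comes from the cluster's daemon rather than the host's. The same pattern, driven from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Evaluate the exports and run docker in the same shell, exactly as
	// the test does; the images listed come from the VM's daemon.
	script := "eval $(out/minikube-linux-amd64 -p functional-035969 docker-env) && docker images"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("docker-env round-trip failed:", err)
	}
	fmt.Printf("%s", out)
}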

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035969 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-035969
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-035969
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035969 image ls --format short --alsologtostderr:
I0811 23:10:41.676366   24129 out.go:296] Setting OutFile to fd 1 ...
I0811 23:10:41.676484   24129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:41.676495   24129 out.go:309] Setting ErrFile to fd 2...
I0811 23:10:41.676500   24129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:41.676726   24129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
I0811 23:10:41.677334   24129 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:41.677445   24129 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:41.677811   24129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:41.677897   24129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:41.692658   24129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
I0811 23:10:41.693075   24129 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:41.693962   24129 main.go:141] libmachine: Using API Version  1
I0811 23:10:41.693984   24129 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:41.694311   24129 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:41.694495   24129 main.go:141] libmachine: (functional-035969) Calling .GetState
I0811 23:10:41.696552   24129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:41.696598   24129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:41.711472   24129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
I0811 23:10:41.711953   24129 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:41.712466   24129 main.go:141] libmachine: Using API Version  1
I0811 23:10:41.712500   24129 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:41.712785   24129 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:41.712943   24129 main.go:141] libmachine: (functional-035969) Calling .DriverName
I0811 23:10:41.713300   24129 ssh_runner.go:195] Run: systemctl --version
I0811 23:10:41.713333   24129 main.go:141] libmachine: (functional-035969) Calling .GetSSHHostname
I0811 23:10:41.715924   24129 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:41.716389   24129 main.go:141] libmachine: (functional-035969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:3d:35", ip: ""} in network mk-functional-035969: {Iface:virbr1 ExpiryTime:2023-08-12 00:07:26 +0000 UTC Type:0 Mac:52:54:00:8b:3d:35 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-035969 Clientid:01:52:54:00:8b:3d:35}
I0811 23:10:41.716459   24129 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined IP address 192.168.39.75 and MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:41.716607   24129 main.go:141] libmachine: (functional-035969) Calling .GetSSHPort
I0811 23:10:41.716775   24129 main.go:141] libmachine: (functional-035969) Calling .GetSSHKeyPath
I0811 23:10:41.716912   24129 main.go:141] libmachine: (functional-035969) Calling .GetSSHUsername
I0811 23:10:41.717174   24129 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/functional-035969/id_rsa Username:docker}
I0811 23:10:41.854173   24129 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0811 23:10:41.886341   24129 main.go:141] libmachine: Making call to close driver server
I0811 23:10:41.886368   24129 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:41.886628   24129 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:41.886647   24129 main.go:141] libmachine: Making call to close connection to plugin binary
I0811 23:10:41.886666   24129 main.go:141] libmachine: Making call to close driver server
I0811 23:10:41.886676   24129 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:41.886886   24129 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:41.886922   24129 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:41.886941   24129 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035969 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.27.4           | 6848d7eda0341 | 71.1MB |
| registry.k8s.io/kube-scheduler              | v1.27.4           | 98ef2570f3cde | 58.4MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 86b6af7dd652c | 296MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-035969 | c9abb275b1d76 | 30B    |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| docker.io/library/nginx                     | latest            | 89da1fb6dcb96 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.27.4           | e7972205b6614 | 121MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager     | v1.27.4           | f466468864b7a | 113MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-035969 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-035969 | 8db5bcf298129 | 1.24MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035969 image ls --format table --alsologtostderr:
I0811 23:10:45.345125   24307 out.go:296] Setting OutFile to fd 1 ...
I0811 23:10:45.345240   24307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:45.345248   24307 out.go:309] Setting ErrFile to fd 2...
I0811 23:10:45.345254   24307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:45.345450   24307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
I0811 23:10:45.345981   24307 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:45.346067   24307 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:45.346375   24307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:45.346428   24307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:45.360675   24307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
I0811 23:10:45.361078   24307 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:45.361680   24307 main.go:141] libmachine: Using API Version  1
I0811 23:10:45.361700   24307 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:45.362031   24307 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:45.362210   24307 main.go:141] libmachine: (functional-035969) Calling .GetState
I0811 23:10:45.364076   24307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:45.364457   24307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:45.378524   24307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
I0811 23:10:45.378878   24307 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:45.379315   24307 main.go:141] libmachine: Using API Version  1
I0811 23:10:45.379363   24307 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:45.379646   24307 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:45.379791   24307 main.go:141] libmachine: (functional-035969) Calling .DriverName
I0811 23:10:45.379969   24307 ssh_runner.go:195] Run: systemctl --version
I0811 23:10:45.379990   24307 main.go:141] libmachine: (functional-035969) Calling .GetSSHHostname
I0811 23:10:45.382455   24307 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:45.382829   24307 main.go:141] libmachine: (functional-035969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:3d:35", ip: ""} in network mk-functional-035969: {Iface:virbr1 ExpiryTime:2023-08-12 00:07:26 +0000 UTC Type:0 Mac:52:54:00:8b:3d:35 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-035969 Clientid:01:52:54:00:8b:3d:35}
I0811 23:10:45.382857   24307 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined IP address 192.168.39.75 and MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:45.383043   24307 main.go:141] libmachine: (functional-035969) Calling .GetSSHPort
I0811 23:10:45.383218   24307 main.go:141] libmachine: (functional-035969) Calling .GetSSHKeyPath
I0811 23:10:45.383389   24307 main.go:141] libmachine: (functional-035969) Calling .GetSSHUsername
I0811 23:10:45.383517   24307 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/functional-035969/id_rsa Username:docker}
I0811 23:10:45.488382   24307 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0811 23:10:45.519249   24307 main.go:141] libmachine: Making call to close driver server
I0811 23:10:45.519264   24307 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:45.519502   24307 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:45.519526   24307 main.go:141] libmachine: Making call to close connection to plugin binary
I0811 23:10:45.519530   24307 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:45.519539   24307 main.go:141] libmachine: Making call to close driver server
I0811 23:10:45.519547   24307 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:45.519765   24307 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:45.519782   24307 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035969 image ls --format json --alsologtostderr:
[{"id":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"113000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"8db5bcf298129da1724fce1b8b656414575a4fc8f34c5a94e6e45a87fa60752e","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-035969"],"size":"1240000"},{"id":"89da1fb6dcb964dd35c3f41b7b93ffc35eaf20bc61f2e1335fea710a18424287","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"58400000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"71100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"c9abb275b1d761e6b3438e5e84ac88dca5f9f33fd4e3bed7ddd9c01ac16b1594","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-035969"],"size":"30"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-035969"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"121000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035969 image ls --format json --alsologtostderr:
I0811 23:10:45.261099   24283 out.go:296] Setting OutFile to fd 1 ...
I0811 23:10:45.261233   24283 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:45.261242   24283 out.go:309] Setting ErrFile to fd 2...
I0811 23:10:45.261246   24283 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:45.261458   24283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
I0811 23:10:45.262014   24283 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:45.262118   24283 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:45.262470   24283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:45.262531   24283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:45.277007   24283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
I0811 23:10:45.277461   24283 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:45.278034   24283 main.go:141] libmachine: Using API Version  1
I0811 23:10:45.278057   24283 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:45.278369   24283 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:45.278547   24283 main.go:141] libmachine: (functional-035969) Calling .GetState
I0811 23:10:45.280364   24283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:45.280416   24283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:45.299852   24283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
I0811 23:10:45.300256   24283 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:45.300749   24283 main.go:141] libmachine: Using API Version  1
I0811 23:10:45.300769   24283 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:45.301144   24283 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:45.301343   24283 main.go:141] libmachine: (functional-035969) Calling .DriverName
I0811 23:10:45.301552   24283 ssh_runner.go:195] Run: systemctl --version
I0811 23:10:45.301583   24283 main.go:141] libmachine: (functional-035969) Calling .GetSSHHostname
I0811 23:10:45.304267   24283 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:45.304700   24283 main.go:141] libmachine: (functional-035969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:3d:35", ip: ""} in network mk-functional-035969: {Iface:virbr1 ExpiryTime:2023-08-12 00:07:26 +0000 UTC Type:0 Mac:52:54:00:8b:3d:35 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-035969 Clientid:01:52:54:00:8b:3d:35}
I0811 23:10:45.304744   24283 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined IP address 192.168.39.75 and MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:45.304931   24283 main.go:141] libmachine: (functional-035969) Calling .GetSSHPort
I0811 23:10:45.305088   24283 main.go:141] libmachine: (functional-035969) Calling .GetSSHKeyPath
I0811 23:10:45.305250   24283 main.go:141] libmachine: (functional-035969) Calling .GetSSHUsername
I0811 23:10:45.305414   24283 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/functional-035969/id_rsa Username:docker}
I0811 23:10:45.402172   24283 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0811 23:10:45.444516   24283 main.go:141] libmachine: Making call to close driver server
I0811 23:10:45.444527   24283 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:45.444866   24283 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:45.444927   24283 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:45.444959   24283 main.go:141] libmachine: Making call to close connection to plugin binary
I0811 23:10:45.444982   24283 main.go:141] libmachine: Making call to close driver server
I0811 23:10:45.444995   24283 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:45.445262   24283 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:45.445297   24283 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:45.445319   24283 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035969 image ls --format yaml --alsologtostderr:
- id: f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "113000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 89da1fb6dcb964dd35c3f41b7b93ffc35eaf20bc61f2e1335fea710a18424287
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "58400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-035969
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: c9abb275b1d761e6b3438e5e84ac88dca5f9f33fd4e3bed7ddd9c01ac16b1594
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-035969
size: "30"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "121000000"
- id: 6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "71100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035969 image ls --format yaml --alsologtostderr:
I0811 23:10:41.936276   24153 out.go:296] Setting OutFile to fd 1 ...
I0811 23:10:41.936437   24153 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:41.936450   24153 out.go:309] Setting ErrFile to fd 2...
I0811 23:10:41.936457   24153 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:41.936750   24153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
I0811 23:10:41.937495   24153 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:41.937638   24153 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:41.938146   24153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:41.938203   24153 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:41.952323   24153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44755
I0811 23:10:41.952819   24153 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:41.953390   24153 main.go:141] libmachine: Using API Version  1
I0811 23:10:41.953412   24153 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:41.953777   24153 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:41.953945   24153 main.go:141] libmachine: (functional-035969) Calling .GetState
I0811 23:10:41.955858   24153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:41.955904   24153 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:41.970475   24153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
I0811 23:10:41.970914   24153 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:41.971552   24153 main.go:141] libmachine: Using API Version  1
I0811 23:10:41.971595   24153 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:41.971972   24153 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:41.972158   24153 main.go:141] libmachine: (functional-035969) Calling .DriverName
I0811 23:10:41.972496   24153 ssh_runner.go:195] Run: systemctl --version
I0811 23:10:41.972529   24153 main.go:141] libmachine: (functional-035969) Calling .GetSSHHostname
I0811 23:10:41.975302   24153 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:41.975703   24153 main.go:141] libmachine: (functional-035969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:3d:35", ip: ""} in network mk-functional-035969: {Iface:virbr1 ExpiryTime:2023-08-12 00:07:26 +0000 UTC Type:0 Mac:52:54:00:8b:3d:35 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-035969 Clientid:01:52:54:00:8b:3d:35}
I0811 23:10:41.975738   24153 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined IP address 192.168.39.75 and MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:41.975883   24153 main.go:141] libmachine: (functional-035969) Calling .GetSSHPort
I0811 23:10:41.976074   24153 main.go:141] libmachine: (functional-035969) Calling .GetSSHKeyPath
I0811 23:10:41.976207   24153 main.go:141] libmachine: (functional-035969) Calling .GetSSHUsername
I0811 23:10:41.976343   24153 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/functional-035969/id_rsa Username:docker}
I0811 23:10:42.079253   24153 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0811 23:10:42.105249   24153 main.go:141] libmachine: Making call to close driver server
I0811 23:10:42.105262   24153 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:42.105552   24153 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:42.105571   24153 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:42.105586   24153 main.go:141] libmachine: Making call to close connection to plugin binary
I0811 23:10:42.105607   24153 main.go:141] libmachine: Making call to close driver server
I0811 23:10:42.105622   24153 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:42.105873   24153 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:42.105884   24153 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:42.105902   24153 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
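For scripted checks outside the test harness, the YAML listing above is easy to query. A minimal sketch, assuming yq is installed (the test itself does not use it):

    # Extract the repo tags from the YAML image listing (yq is an assumption, not part of the test).
    out/minikube-linux-amd64 -p functional-035969 image ls --format yaml | yq '.[].repoTags[]'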

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh pgrep buildkitd: exit status 1 (222.721894ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image build -t localhost/my-image:functional-035969 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image build -t localhost/my-image:functional-035969 testdata/build --alsologtostderr: (2.595533924s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035969 image build -t localhost/my-image:functional-035969 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 598d77f0e992
Removing intermediate container 598d77f0e992
---> e98d34e69394
Step 3/3 : ADD content.txt /
---> 8db5bcf29812
Successfully built 8db5bcf29812
Successfully tagged localhost/my-image:functional-035969
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035969 image build -t localhost/my-image:functional-035969 testdata/build --alsologtostderr:
I0811 23:10:42.378607   24217 out.go:296] Setting OutFile to fd 1 ...
I0811 23:10:42.378772   24217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:42.378781   24217 out.go:309] Setting ErrFile to fd 2...
I0811 23:10:42.378788   24217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:10:42.379001   24217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
I0811 23:10:42.379596   24217 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:42.380094   24217 config.go:182] Loaded profile config "functional-035969": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0811 23:10:42.380450   24217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:42.380503   24217 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:42.395028   24217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
I0811 23:10:42.395465   24217 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:42.396078   24217 main.go:141] libmachine: Using API Version  1
I0811 23:10:42.396111   24217 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:42.396592   24217 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:42.396818   24217 main.go:141] libmachine: (functional-035969) Calling .GetState
I0811 23:10:42.398977   24217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0811 23:10:42.399027   24217 main.go:141] libmachine: Launching plugin server for driver kvm2
I0811 23:10:42.413512   24217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
I0811 23:10:42.413918   24217 main.go:141] libmachine: () Calling .GetVersion
I0811 23:10:42.414439   24217 main.go:141] libmachine: Using API Version  1
I0811 23:10:42.414473   24217 main.go:141] libmachine: () Calling .SetConfigRaw
I0811 23:10:42.414763   24217 main.go:141] libmachine: () Calling .GetMachineName
I0811 23:10:42.414939   24217 main.go:141] libmachine: (functional-035969) Calling .DriverName
I0811 23:10:42.415145   24217 ssh_runner.go:195] Run: systemctl --version
I0811 23:10:42.415176   24217 main.go:141] libmachine: (functional-035969) Calling .GetSSHHostname
I0811 23:10:42.417619   24217 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:42.417997   24217 main.go:141] libmachine: (functional-035969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:3d:35", ip: ""} in network mk-functional-035969: {Iface:virbr1 ExpiryTime:2023-08-12 00:07:26 +0000 UTC Type:0 Mac:52:54:00:8b:3d:35 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-035969 Clientid:01:52:54:00:8b:3d:35}
I0811 23:10:42.418026   24217 main.go:141] libmachine: (functional-035969) DBG | domain functional-035969 has defined IP address 192.168.39.75 and MAC address 52:54:00:8b:3d:35 in network mk-functional-035969
I0811 23:10:42.418132   24217 main.go:141] libmachine: (functional-035969) Calling .GetSSHPort
I0811 23:10:42.418283   24217 main.go:141] libmachine: (functional-035969) Calling .GetSSHKeyPath
I0811 23:10:42.418417   24217 main.go:141] libmachine: (functional-035969) Calling .GetSSHUsername
I0811 23:10:42.418557   24217 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/functional-035969/id_rsa Username:docker}
I0811 23:10:42.509944   24217 build_images.go:151] Building image from path: /tmp/build.966005560.tar
I0811 23:10:42.510007   24217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0811 23:10:42.520273   24217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.966005560.tar
I0811 23:10:42.524894   24217 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.966005560.tar: stat -c "%s %y" /var/lib/minikube/build/build.966005560.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.966005560.tar': No such file or directory
I0811 23:10:42.524926   24217 ssh_runner.go:362] scp /tmp/build.966005560.tar --> /var/lib/minikube/build/build.966005560.tar (3072 bytes)
I0811 23:10:42.550143   24217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.966005560
I0811 23:10:42.558630   24217 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.966005560 -xf /var/lib/minikube/build/build.966005560.tar
I0811 23:10:42.567581   24217 docker.go:339] Building image: /var/lib/minikube/build/build.966005560
I0811 23:10:42.567658   24217 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-035969 /var/lib/minikube/build/build.966005560
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0811 23:10:44.904129   24217 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-035969 /var/lib/minikube/build/build.966005560: (2.336444791s)
I0811 23:10:44.904188   24217 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.966005560
I0811 23:10:44.916162   24217 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.966005560.tar
I0811 23:10:44.925614   24217 build_images.go:207] Built localhost/my-image:functional-035969 from /tmp/build.966005560.tar
I0811 23:10:44.925641   24217 build_images.go:123] succeeded building to: functional-035969
I0811 23:10:44.925646   24217 build_images.go:124] failed building to: 
I0811 23:10:44.925672   24217 main.go:141] libmachine: Making call to close driver server
I0811 23:10:44.925687   24217 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:44.925933   24217 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:44.925951   24217 main.go:141] libmachine: Making call to close connection to plugin binary
I0811 23:10:44.925959   24217 main.go:141] libmachine: Making call to close driver server
I0811 23:10:44.925967   24217 main.go:141] libmachine: (functional-035969) Calling .Close
I0811 23:10:44.925975   24217 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:44.926312   24217 main.go:141] libmachine: Successfully made call to close driver server
I0811 23:10:44.926317   24217 main.go:141] libmachine: (functional-035969) DBG | Closing plugin on server side
I0811 23:10:44.926339   24217 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
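The three build steps in the stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) are consistent with a build context of roughly the following shape. This is a sketch reconstructed from the logged steps, not the actual testdata/build contents:

    # Recreate a build context matching the logged steps (hypothetical /tmp path).
    mkdir -p /tmp/build-sketch
    echo "content" > /tmp/build-sketch/content.txt
    printf '%s\n' 'FROM gcr.io/k8s-minikube/busybox' 'RUN true' 'ADD content.txt /' > /tmp/build-sketch/Dockerfile
    out/minikube-linux-amd64 -p functional-035969 image build -t localhost/my-image:functional-035969 /tmp/build-sketch --alsologtostderr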

TestFunctional/parallel/ImageCommands/Setup (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.341096473s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-035969
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.36s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-035969 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-035969 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-nn26d" [25321e8f-6218-4506-a09f-7b3fea4df4f2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-nn26d" [25321e8f-6218-4506-a09f-7b3fea4df4f2] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.139601231s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image load --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image load --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr: (4.011560236s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image load --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image load --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr: (2.228552s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.43s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.162455694s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-035969
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image load --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image load --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr: (3.697092137s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image save gcr.io/google-containers/addon-resizer:functional-035969 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image save gcr.io/google-containers/addon-resizer:functional-035969 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.987455592s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.99s)

TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 service list -o json
functional_test.go:1493: Took "423.595392ms" to run "out/minikube-linux-amd64 -p functional-035969 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.75:31942
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image rm gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.75:31942
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
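As a manual equivalent of the URL checks above, the endpoint can be captured and probed directly; curl is assumed to be available and is not part of the test:

    # Resolve the NodePort URL for hello-node, then probe it (curl is an assumption).
    URL=$(out/minikube-linux-amd64 -p functional-035969 service hello-node --url)
    echo "endpoint: $URL"                      # e.g. http://192.168.39.75:31942
    curl -s -o /dev/null -w '%{http_code}\n' "$URL"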

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.364436587s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.60s)
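ImageSaveToFile and ImageLoadFromFile together exercise a tar round-trip. Run by hand, with a hypothetical /tmp path in place of the workspace path used above, it looks like this:

    # Save an image from the cluster runtime to a tar, then load it back and verify.
    out/minikube-linux-amd64 -p functional-035969 image save gcr.io/google-containers/addon-resizer:functional-035969 /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-035969 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-035969 image ls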

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "223.177858ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "41.161202ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "216.843741ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "38.79941ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
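The JSON output validated above is machine-readable. A minimal sketch for pulling profile names out of it, assuming jq is installed (the test only measures timing):

    # List valid profile names from the JSON output (jq and the .valid[].Name shape are assumptions).
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'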

TestFunctional/parallel/MountCmd/any-port (28.92s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdany-port364229396/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1691795407974949338" to /tmp/TestFunctionalparallelMountCmdany-port364229396/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1691795407974949338" to /tmp/TestFunctionalparallelMountCmdany-port364229396/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1691795407974949338" to /tmp/TestFunctionalparallelMountCmdany-port364229396/001/test-1691795407974949338
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.854987ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 11 23:10 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 11 23:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 11 23:10 test-1691795407974949338
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh cat /mount-9p/test-1691795407974949338
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-035969 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ca7da2fc-389c-450c-9afc-ec09f26a933c] Pending
helpers_test.go:344: "busybox-mount" [ca7da2fc-389c-450c-9afc-ec09f26a933c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ca7da2fc-389c-450c-9afc-ec09f26a933c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ca7da2fc-389c-450c-9afc-ec09f26a933c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.01342327s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-035969 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdany-port364229396/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (28.92s)
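The same 9p mount flow can be driven by hand. A sketch mirroring the commands the test runs, with a hypothetical host directory:

    # Mount a host dir into the guest over 9p, verify it, then clean up.
    mkdir -p /tmp/mount-demo && echo test > /tmp/mount-demo/created-by-hand
    out/minikube-linux-amd64 mount -p functional-035969 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    sleep 5    # crude settle; the test instead retries findmnt until it succeeds
    out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-035969 ssh -- ls -la /mount-9p
    kill $MOUNT_PID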

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-035969
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 image save --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-035969 image save --daemon gcr.io/google-containers/addon-resizer:functional-035969 --alsologtostderr: (1.919821791s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-035969
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.96s)

TestFunctional/parallel/MountCmd/specific-port (2.14s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdspecific-port4091218445/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.515522ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdspecific-port4091218445/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh "sudo umount -f /mount-9p": exit status 1 (196.931077ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-035969 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdspecific-port4091218445/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1916621411/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1916621411/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1916621411/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T" /mount1: exit status 1 (303.155502ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035969 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-035969 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1916621411/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1916621411/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1916621411/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-035969
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-035969
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-035969
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (272.63s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-358056 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-358056 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (55.32197576s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-358056 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0811 23:41:13.101015   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.106350   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.116644   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.137010   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.297598   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.377988   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.538390   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:13.859018   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:14.499923   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:15.780312   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:18.341243   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:41:23.462255   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-358056 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.118600455s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-358056 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-358056 addons enable gvisor: (5.346229073s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [e510c1c4-bbe8-4bcb-bf7f-514702efe966] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.025232816s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-358056 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [9ed50fc0-22ad-4efa-8507-d1e8a97a4289] Pending
helpers_test.go:344: "nginx-gvisor" [9ed50fc0-22ad-4efa-8507-d1e8a97a4289] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [9ed50fc0-22ad-4efa-8507-d1e8a97a4289] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 16.025879876s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-358056
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-358056: (1m32.289440753s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-358056 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0811 23:43:51.339633   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:43:57.065494   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-358056 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m3.992792335s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [e510c1c4-bbe8-4bcb-bf7f-514702efe966] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.027822902s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [9ed50fc0-22ad-4efa-8507-d1e8a97a4289] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.011571163s
helpers_test.go:175: Cleaning up "gvisor-358056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-358056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-358056: (1.076948127s)
--- PASS: TestGvisorAddon (272.63s)
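Condensed from the run above, the gVisor flow is: start with the containerd runtime, cache the addon image, enable the addon, then schedule a pod on the gvisor runtime and confirm it survives a stop/start. A sketch of the same sequence (the final kubectl status check is an addition, not part of the test):

    # The gVisor enablement sequence this test exercises.
    out/minikube-linux-amd64 start -p gvisor-358056 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
    out/minikube-linux-amd64 -p gvisor-358056 cache add gcr.io/k8s-minikube/gvisor-addon:2
    out/minikube-linux-amd64 -p gvisor-358056 addons enable gvisor
    kubectl --context gvisor-358056 replace --force -f testdata/nginx-gvisor.yaml
    kubectl --context gvisor-358056 get pods -l run=nginx,runtime=gvisor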

TestImageBuild/serial/Setup (50.13s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-789520 --driver=kvm2 
E0811 23:11:35.183405   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-789520 --driver=kvm2 : (50.127284035s)
--- PASS: TestImageBuild/serial/Setup (50.13s)

TestImageBuild/serial/NormalBuild (1.55s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-789520
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-789520: (1.54497309s)
--- PASS: TestImageBuild/serial/NormalBuild (1.55s)

TestImageBuild/serial/BuildWithBuildArg (1.24s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-789520
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-789520: (1.235583496s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.24s)
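The --build-opt flags above are passed through to the underlying build (here as a build arg plus no-cache). A hypothetical Dockerfile that would consume ENV_A; the real testdata/image-build/test-arg contents are not shown in the log:

    # Hypothetical Dockerfile consuming the ENV_A build arg passed via --build-opt.
    printf '%s\n' 'FROM gcr.io/k8s-minikube/busybox' 'ARG ENV_A' 'RUN echo "ENV_A=${ENV_A}"' > Dockerfile
    out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-789520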

TestImageBuild/serial/BuildWithDockerIgnore (0.36s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-789520
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.36s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-789520
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)

TestIngressAddonLegacy/StartLegacyK8sCluster (81.33s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-581758 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-581758 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m21.326303272s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (81.33s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.96s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons enable ingress --alsologtostderr -v=5: (16.955904271s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.96s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.6s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-581758 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-581758 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.28599203s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-581758 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-581758 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [21aaa1ad-7484-4540-9cc7-a08640b7ac21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [21aaa1ad-7484-4540-9cc7-a08640b7ac21] Running
E0811 23:13:51.338769   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.01292984s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-581758 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-581758 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-581758 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.220
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons disable ingress-dns --alsologtostderr -v=1: (2.640617709s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons disable ingress --alsologtostderr -v=1: (7.494488914s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.60s)
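For reference, the validated flow above can be replayed by hand against the same profile. This is a minimal sketch built only from the commands captured in this block; the testdata manifests belong to the integration suite and are assumed to be present in the working directory:

    $ out/minikube-linux-amd64 -p ingress-addon-legacy-581758 addons enable ingress --alsologtostderr -v=5
    $ kubectl --context ingress-addon-legacy-581758 wait --for=condition=ready --namespace=ingress-nginx \
        pod --selector=app.kubernetes.io/component=controller --timeout=90s
    $ kubectl --context ingress-addon-legacy-581758 replace --force -f testdata/nginx-ingress-v1beta1.yaml
    $ kubectl --context ingress-addon-legacy-581758 replace --force -f testdata/nginx-pod-svc.yaml
    $ out/minikube-linux-amd64 -p ingress-addon-legacy-581758 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

The curl should succeed once the run=nginx pod reports Ready, as it did within about 11s here.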

                                                
                                    
TestJSONOutput/start/Command (104.17s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-537604 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0811 23:14:19.023863   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:14:51.067351   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.072626   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.082954   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.103201   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.143481   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.223796   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.384432   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:51.705026   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:52.346084   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:53.626692   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:14:56.187530   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:15:01.308172   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:15:11.548323   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:15:32.028542   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-537604 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m44.174139301s)
--- PASS: TestJSONOutput/start/Command (104.17s)
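Each line emitted under --output=json is a CloudEvents-style JSON object (the TestErrorJSONOutput capture further down shows the exact shape). A minimal sketch of consuming the step events from the stream, assuming jq is available on the host:

    $ out/minikube-linux-amd64 start -p json-output-537604 --output=json --user=testUser \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "[\(.data.currentstep)/\(.data.totalsteps)] \(.data.message)"'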

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-537604 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.52s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-537604 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (13.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-537604 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-537604 --output=json --user=testUser: (13.093970145s)
--- PASS: TestJSONOutput/stop/Command (13.09s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-317858 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-317858 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.759004ms)
-- stdout --
	{"specversion":"1.0","id":"8f2c689b-32dd-4bdc-bd18-edd336d72f70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-317858] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b52b657-c013-4210-ad18-919e2cf69aa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17044"}}
	{"specversion":"1.0","id":"a9c6d3cf-3681-4be8-8128-9ceeb015d0d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5a606ba5-a595-490b-8e43-3c4e3f014902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig"}}
	{"specversion":"1.0","id":"bbfc0b29-7ad4-4823-bc4a-ba065875203a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube"}}
	{"specversion":"1.0","id":"dccbfc87-bcd0-4418-8e9b-06369d024aed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3ebb282d-5a3b-481b-b9be-179b21ec3431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a992fe3e-afa7-4823-8139-1510429b44ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-317858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-317858
--- PASS: TestErrorJSONOutput (0.18s)
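The io.k8s.sigs.minikube.error event in the capture carries the machine-readable failure: name, exitcode, and message. A sketch of extracting those fields, again assuming jq (the start command itself still exits 56, so check $? separately if scripting this):

    $ out/minikube-linux-amd64 start -p json-output-error-317858 --memory=2200 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'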

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (111.24s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-773935 --driver=kvm2 
E0811 23:16:12.989680   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-773935 --driver=kvm2 : (55.278752237s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-776725 --driver=kvm2 
E0811 23:17:34.911617   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-776725 --driver=kvm2 : (53.178583318s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-773935
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-776725
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-776725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-776725
helpers_test.go:175: Cleaning up "first-773935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-773935
--- PASS: TestMinikubeProfile (111.24s)
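profile list -ojson is the scriptable variant exercised twice above. A sketch of reading the profile names from it, assuming the output keeps its current top-level "valid"/"invalid" arrays (an assumption about the JSON shape, which this capture does not show):

    $ out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'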

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.54s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-222602 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-222602 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.537795579s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.54s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-222602 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-222602 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
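Those two ssh commands are the entire verification: the host directory appears at /minikube-host inside the VM, and the mount transport is 9p, matching the --mount-msize/--mount-port flags passed at start. By hand, against the same profile:

    $ out/minikube-linux-amd64 -p mount-start-1-222602 ssh -- ls /minikube-host
    $ out/minikube-linux-amd64 -p mount-start-1-222602 ssh -- mount | grep 9p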

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.44s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-241871 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0811 23:18:29.912981   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:29.918263   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:29.928528   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:29.948805   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:29.989064   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:30.069402   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:30.229849   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:30.550407   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:31.191426   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:32.471655   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:35.033668   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:40.153863   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:50.394692   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:18:51.339143   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-241871 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.44422435s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-241871 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-241871 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.86s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-222602 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-241871 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-241871 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (2.08s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-241871
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-241871: (2.076537772s)
--- PASS: TestMountStart/serial/Stop (2.08s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.75s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-241871
E0811 23:19:10.875198   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-241871: (22.749809633s)
--- PASS: TestMountStart/serial/RestartStopped (23.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-241871 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-241871 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (121.9s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-618164 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0811 23:19:51.067622   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:19:51.835757   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:20:18.752716   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:21:13.756954   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-618164 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m1.490265655s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.90s)
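The two-node bring-up reduces to a single start invocation plus a status check; a minimal sketch with the flags taken verbatim from this run:

    $ out/minikube-linux-amd64 start -p multinode-618164 --wait=true --memory=2200 --nodes=2 --driver=kvm2
    $ out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr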

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.89s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-618164 -- rollout status deployment/busybox: (3.117861772s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-vrdpw -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-vrdpw -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-vrdpw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.89s)
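The DNS assertions run once per busybox replica. A compact sketch of the same loop; pod names such as busybox-67b7f59bb-dspxl are generated per run, so they are captured rather than hard-coded:

    $ PODS=$(out/minikube-linux-amd64 kubectl -p multinode-618164 -- get pods -o jsonpath='{.items[*].metadata.name}')
    $ for p in $PODS; do out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec "$p" -- nslookup kubernetes.default.svc.cluster.local; done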

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-vrdpw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-vrdpw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
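Host reachability is checked by resolving host.minikube.internal inside a pod and pinging the returned gateway address (192.168.39.1 on this run). As a sketch, with the nslookup/awk/cut pipeline copied from the test and the per-run pod name as a placeholder:

    $ HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    $ out/minikube-linux-amd64 kubectl -p multinode-618164 -- exec busybox-67b7f59bb-dspxl -- sh -c "ping -c 1 $HOST_IP"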

                                                
                                    
TestMultiNode/serial/AddNode (46.36s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-618164 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-618164 -v 3 --alsologtostderr: (45.79053531s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.36s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.19s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.19s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp testdata/cp-test.txt multinode-618164:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3974164346/001/cp-test_multinode-618164.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164:/home/docker/cp-test.txt multinode-618164-m02:/home/docker/cp-test_multinode-618164_multinode-618164-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test_multinode-618164_multinode-618164-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164:/home/docker/cp-test.txt multinode-618164-m03:/home/docker/cp-test_multinode-618164_multinode-618164-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m03 "sudo cat /home/docker/cp-test_multinode-618164_multinode-618164-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp testdata/cp-test.txt multinode-618164-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3974164346/001/cp-test_multinode-618164-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt multinode-618164:/home/docker/cp-test_multinode-618164-m02_multinode-618164.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test_multinode-618164-m02_multinode-618164.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt multinode-618164-m03:/home/docker/cp-test_multinode-618164-m02_multinode-618164-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m03 "sudo cat /home/docker/cp-test_multinode-618164-m02_multinode-618164-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp testdata/cp-test.txt multinode-618164-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3974164346/001/cp-test_multinode-618164-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m03:/home/docker/cp-test.txt multinode-618164:/home/docker/cp-test_multinode-618164-m03_multinode-618164.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test_multinode-618164-m03_multinode-618164.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m03:/home/docker/cp-test.txt multinode-618164-m02:/home/docker/cp-test_multinode-618164-m03_multinode-618164-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test_multinode-618164-m03_multinode-618164-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)
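Every pairing above repeats one three-step pattern: cp a file into a node, cat it over ssh to verify, then cp it node-to-node and verify on the destination. One round of that pattern, as a sketch:

    $ out/minikube-linux-amd64 -p multinode-618164 cp testdata/cp-test.txt multinode-618164-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164-m02 "sudo cat /home/docker/cp-test.txt"
    $ out/minikube-linux-amd64 -p multinode-618164 cp multinode-618164-m02:/home/docker/cp-test.txt \
        multinode-618164:/home/docker/cp-test_multinode-618164-m02_multinode-618164.txt
    $ out/minikube-linux-amd64 -p multinode-618164 ssh -n multinode-618164 "sudo cat /home/docker/cp-test_multinode-618164-m02_multinode-618164.txt"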

                                                
                                    
TestMultiNode/serial/StopNode (3.91s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-618164 node stop m03: (3.081318146s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-618164 status: exit status 7 (414.338634ms)
-- stdout --
	multinode-618164
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-618164-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-618164-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr: exit status 7 (417.519227ms)
-- stdout --
	multinode-618164
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-618164-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-618164-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0811 23:22:31.636563   31693 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:22:31.636702   31693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:22:31.636712   31693 out.go:309] Setting ErrFile to fd 2...
	I0811 23:22:31.636716   31693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:22:31.636920   31693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	I0811 23:22:31.637063   31693 out.go:303] Setting JSON to false
	I0811 23:22:31.637095   31693 mustload.go:65] Loading cluster: multinode-618164
	I0811 23:22:31.637124   31693 notify.go:220] Checking for updates...
	I0811 23:22:31.637589   31693 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:22:31.637609   31693 status.go:255] checking status of multinode-618164 ...
	I0811 23:22:31.637993   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.638067   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:31.653535   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0811 23:22:31.653935   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:31.654582   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:31.654616   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:31.654938   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:31.655149   31693 main.go:141] libmachine: (multinode-618164) Calling .GetState
	I0811 23:22:31.656627   31693 status.go:330] multinode-618164 host status = "Running" (err=<nil>)
	I0811 23:22:31.656640   31693 host.go:66] Checking if "multinode-618164" exists ...
	I0811 23:22:31.656953   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.656988   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:31.671872   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0811 23:22:31.672306   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:31.672737   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:31.672759   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:31.673143   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:31.673373   31693 main.go:141] libmachine: (multinode-618164) Calling .GetIP
	I0811 23:22:31.675848   31693 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:22:31.676426   31693 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:19:42 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:22:31.676456   31693 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:22:31.676561   31693 host.go:66] Checking if "multinode-618164" exists ...
	I0811 23:22:31.676845   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.676888   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:31.691229   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0811 23:22:31.691627   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:31.692136   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:31.692157   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:31.692459   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:31.692678   31693 main.go:141] libmachine: (multinode-618164) Calling .DriverName
	I0811 23:22:31.692891   31693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:22:31.692916   31693 main.go:141] libmachine: (multinode-618164) Calling .GetSSHHostname
	I0811 23:22:31.695556   31693 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:22:31.695909   31693 main.go:141] libmachine: (multinode-618164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:97:b5", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:19:42 +0000 UTC Type:0 Mac:52:54:00:ac:97:b5 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-618164 Clientid:01:52:54:00:ac:97:b5}
	I0811 23:22:31.695932   31693 main.go:141] libmachine: (multinode-618164) DBG | domain multinode-618164 has defined IP address 192.168.39.6 and MAC address 52:54:00:ac:97:b5 in network mk-multinode-618164
	I0811 23:22:31.696059   31693 main.go:141] libmachine: (multinode-618164) Calling .GetSSHPort
	I0811 23:22:31.696328   31693 main.go:141] libmachine: (multinode-618164) Calling .GetSSHKeyPath
	I0811 23:22:31.696513   31693 main.go:141] libmachine: (multinode-618164) Calling .GetSSHUsername
	I0811 23:22:31.696667   31693 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164/id_rsa Username:docker}
	I0811 23:22:31.784328   31693 ssh_runner.go:195] Run: systemctl --version
	I0811 23:22:31.789841   31693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:22:31.803680   31693 kubeconfig.go:92] found "multinode-618164" server: "https://192.168.39.6:8443"
	I0811 23:22:31.803701   31693 api_server.go:166] Checking apiserver status ...
	I0811 23:22:31.803727   31693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:22:31.816299   31693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1908/cgroup
	I0811 23:22:31.824200   31693 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/podf0707583abef3bd312ad889b26693949/2965fda37c078e7d006734622100e8cdbc5c058917b52011da8c71afd8311350"
	I0811 23:22:31.824260   31693 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf0707583abef3bd312ad889b26693949/2965fda37c078e7d006734622100e8cdbc5c058917b52011da8c71afd8311350/freezer.state
	I0811 23:22:31.832578   31693 api_server.go:204] freezer state: "THAWED"
	I0811 23:22:31.832596   31693 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0811 23:22:31.838654   31693 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0811 23:22:31.838673   31693 status.go:421] multinode-618164 apiserver status = Running (err=<nil>)
	I0811 23:22:31.838681   31693 status.go:257] multinode-618164 status: &{Name:multinode-618164 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0811 23:22:31.838697   31693 status.go:255] checking status of multinode-618164-m02 ...
	I0811 23:22:31.838992   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.839016   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:31.853719   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39295
	I0811 23:22:31.854102   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:31.854607   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:31.854635   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:31.854942   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:31.855134   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .GetState
	I0811 23:22:31.856614   31693 status.go:330] multinode-618164-m02 host status = "Running" (err=<nil>)
	I0811 23:22:31.856627   31693 host.go:66] Checking if "multinode-618164-m02" exists ...
	I0811 23:22:31.856897   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.856920   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:31.871393   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I0811 23:22:31.871743   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:31.872259   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:31.872287   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:31.872584   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:31.872768   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .GetIP
	I0811 23:22:31.875445   31693 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:22:31.875877   31693 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:22:31.875910   31693 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:22:31.876054   31693 host.go:66] Checking if "multinode-618164-m02" exists ...
	I0811 23:22:31.876331   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.876384   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:31.890135   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0811 23:22:31.890512   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:31.890913   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:31.890933   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:31.891258   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:31.891424   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .DriverName
	I0811 23:22:31.891607   31693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:22:31.891626   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHHostname
	I0811 23:22:31.893859   31693 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:22:31.894273   31693 main.go:141] libmachine: (multinode-618164-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:12:e8", ip: ""} in network mk-multinode-618164: {Iface:virbr1 ExpiryTime:2023-08-12 00:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:12:e8 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-618164-m02 Clientid:01:52:54:00:d3:12:e8}
	I0811 23:22:31.894311   31693 main.go:141] libmachine: (multinode-618164-m02) DBG | domain multinode-618164-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:d3:12:e8 in network mk-multinode-618164
	I0811 23:22:31.894402   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHPort
	I0811 23:22:31.894576   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHKeyPath
	I0811 23:22:31.894720   31693 main.go:141] libmachine: (multinode-618164-m02) Calling .GetSSHUsername
	I0811 23:22:31.894832   31693 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17044-9593/.minikube/machines/multinode-618164-m02/id_rsa Username:docker}
	I0811 23:22:31.982291   31693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:22:31.996033   31693 status.go:257] multinode-618164-m02 status: &{Name:multinode-618164-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0811 23:22:31.996065   31693 status.go:255] checking status of multinode-618164-m03 ...
	I0811 23:22:31.996350   31693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:22:31.996373   31693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:22:32.011332   31693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0811 23:22:32.011696   31693 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:22:32.012233   31693 main.go:141] libmachine: Using API Version  1
	I0811 23:22:32.012262   31693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:22:32.012607   31693 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:22:32.012777   31693 main.go:141] libmachine: (multinode-618164-m03) Calling .GetState
	I0811 23:22:32.014383   31693 status.go:330] multinode-618164-m03 host status = "Stopped" (err=<nil>)
	I0811 23:22:32.014395   31693 status.go:343] host is not running, skipping remaining checks
	I0811 23:22:32.014408   31693 status.go:257] multinode-618164-m03 status: &{Name:multinode-618164-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.91s)
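Note the exit codes above: the stop itself succeeds, but status exits 7 rather than 0 once any host in the profile is stopped. A sketch of relying on that in a script, with the exit-code behavior as observed in these captures:

    $ out/minikube-linux-amd64 -p multinode-618164 node stop m03
    $ out/minikube-linux-amd64 -p multinode-618164 status
    $ echo $?   # 7 here, since m03 reports host: Stopped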

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.13s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-618164 node start m03 --alsologtostderr: (31.501697424s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.13s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.75s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-618164 node delete m03: (4.209086387s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.75s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (26.29s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-618164 stop: (26.14428812s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-618164 status: exit status 7 (73.545569ms)
-- stdout --
	multinode-618164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-618164-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr: exit status 7 (73.36942ms)
-- stdout --
	multinode-618164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-618164-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:26:14.949466   33152 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:26:14.949600   33152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:26:14.949612   33152 out.go:309] Setting ErrFile to fd 2...
	I0811 23:26:14.949623   33152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:26:14.949833   33152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-9593/.minikube/bin
	I0811 23:26:14.950011   33152 out.go:303] Setting JSON to false
	I0811 23:26:14.950043   33152 mustload.go:65] Loading cluster: multinode-618164
	I0811 23:26:14.950144   33152 notify.go:220] Checking for updates...
	I0811 23:26:14.950535   33152 config.go:182] Loaded profile config "multinode-618164": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0811 23:26:14.950555   33152 status.go:255] checking status of multinode-618164 ...
	I0811 23:26:14.950999   33152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:26:14.951073   33152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:26:14.965620   33152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0811 23:26:14.965984   33152 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:26:14.966564   33152 main.go:141] libmachine: Using API Version  1
	I0811 23:26:14.966586   33152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:26:14.966966   33152 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:26:14.967198   33152 main.go:141] libmachine: (multinode-618164) Calling .GetState
	I0811 23:26:14.968774   33152 status.go:330] multinode-618164 host status = "Stopped" (err=<nil>)
	I0811 23:26:14.968790   33152 status.go:343] host is not running, skipping remaining checks
	I0811 23:26:14.968797   33152 status.go:257] multinode-618164 status: &{Name:multinode-618164 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0811 23:26:14.968831   33152 status.go:255] checking status of multinode-618164-m02 ...
	I0811 23:26:14.969236   33152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0811 23:26:14.969270   33152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0811 23:26:14.983357   33152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I0811 23:26:14.983691   33152 main.go:141] libmachine: () Calling .GetVersion
	I0811 23:26:14.984226   33152 main.go:141] libmachine: Using API Version  1
	I0811 23:26:14.984246   33152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0811 23:26:14.984523   33152 main.go:141] libmachine: () Calling .GetMachineName
	I0811 23:26:14.984691   33152 main.go:141] libmachine: (multinode-618164-m02) Calling .GetState
	I0811 23:26:14.986014   33152 status.go:330] multinode-618164-m02 host status = "Stopped" (err=<nil>)
	I0811 23:26:14.986025   33152 status.go:343] host is not running, skipping remaining checks
	I0811 23:26:14.986029   33152 status.go:257] multinode-618164-m02 status: &{Name:multinode-618164-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.29s)
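
Note: minikube status deliberately exits non-zero when hosts are down, so exit status 7 above is the expected outcome, not a failure. A manual equivalent of this check (a sketch; profile name from this run):

	out/minikube-linux-amd64 -p multinode-618164 stop
	out/minikube-linux-amd64 -p multinode-618164 status
	echo "status exit code: $?"   # 7 indicates stopped hosts, matching the run above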

                                                
                                    
TestMultiNode/serial/RestartMultiNode (106.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-618164 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-618164 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m45.643893312s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-618164 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.15s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (52.53s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-618164
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-618164-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-618164-m02 --driver=kvm2 : exit status 14 (58.867502ms)

                                                
                                                
-- stdout --
	* [multinode-618164-m02] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-618164-m02' is duplicated with machine name 'multinode-618164-m02' in profile 'multinode-618164'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-618164-m03 --driver=kvm2 
E0811 23:28:29.912325   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:28:51.339023   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-618164-m03 --driver=kvm2 : (51.466876412s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-618164
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-618164: exit status 80 (207.622389ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-618164
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-618164-m03 already exists in multinode-618164-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-618164-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.53s)
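
Note: the conflict above comes from minikube's <profile>-m02 naming scheme for secondary machines: a new profile whose name collides with an existing profile's machine name is rejected with exit status 14. A condensed manual repro (a sketch; names from this run):

	out/minikube-linux-amd64 node list -p multinode-618164                  # lists machine multinode-618164-m02
	out/minikube-linux-amd64 start -p multinode-618164-m02 --driver=kvm2    # rejected: duplicated profile name
	out/minikube-linux-amd64 start -p multinode-618164-m03 --driver=kvm2    # non-colliding name starts, but later trips 'node add'
	out/minikube-linux-amd64 delete -p multinode-618164-m03                 # cleanup, as the test does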

                                                
                                    
TestPreload (185.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-156474 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0811 23:29:51.067657   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-156474 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m39.49061485s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-156474 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-156474 image pull gcr.io/k8s-minikube/busybox: (1.264166732s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-156474
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-156474: (13.096402447s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-156474 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0811 23:31:14.113143   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-156474 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m10.152433647s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-156474 image list
helpers_test.go:175: Cleaning up "test-preload-156474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-156474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-156474: (1.035419506s)
--- PASS: TestPreload (185.25s)
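
Note: the sequence above checks that an image pulled before a stop is still listed after a restart once preload tarballs take effect. A condensed manual version (a sketch; version, memory, and profile name from this run):

	P=test-preload-156474
	out/minikube-linux-amd64 start -p $P --memory=2200 --preload=false --driver=kvm2 --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p $P image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p $P
	out/minikube-linux-amd64 start -p $P --memory=2200 --driver=kvm2
	out/minikube-linux-amd64 -p $P image list   # busybox should still be listed
	out/minikube-linux-amd64 delete -p $P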

                                                
                                    
TestScheduledStopUnix (122s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-256737 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-256737 --memory=2048 --driver=kvm2 : (50.497081928s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256737 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-256737 -n scheduled-stop-256737
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256737 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256737 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256737 -n scheduled-stop-256737
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-256737
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256737 --schedule 15s
E0811 23:33:29.913109   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0811 23:33:51.339686   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-256737
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-256737: exit status 7 (60.009274ms)

                                                
                                                
-- stdout --
	scheduled-stop-256737
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256737 -n scheduled-stop-256737
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256737 -n scheduled-stop-256737: exit status 7 (58.418624ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-256737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-256737
--- PASS: TestScheduledStopUnix (122.00s)
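
Note: the flags exercised above arm, replace, and cancel a delayed stop. In outline (a sketch; profile name from this run):

	out/minikube-linux-amd64 stop -p scheduled-stop-256737 --schedule 5m    # arm a stop five minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-256737 --schedule 15s   # rescheduling replaces the pending timer
	out/minikube-linux-amd64 stop -p scheduled-stop-256737 --cancel-scheduled
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-256737   # shows the remaining delay, if any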

                                                
                                    
TestSkaffold (139.84s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1258449713 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-854387 --memory=2600 --driver=kvm2 
E0811 23:34:51.067913   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:34:52.958135   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-854387 --memory=2600 --driver=kvm2 : (50.549010143s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1258449713 run --minikube-profile skaffold-854387 --kube-context skaffold-854387 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1258449713 run --minikube-profile skaffold-854387 --kube-context skaffold-854387 --status-check=true --port-forward=false --interactive=false: (1m17.527564602s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6f96d949fc-lhwv4" [4ce9c56f-674c-4c9d-b9d4-fc6408499cd0] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.019851973s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6db65b9db-d97jl" [f35ac6ca-edb0-4d40-bee0-677042db1f6e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010482524s
helpers_test.go:175: Cleaning up "skaffold-854387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-854387
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-854387: (1.130246475s)
--- PASS: TestSkaffold (139.84s)
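
Note: the skaffold invocation above pins both the minikube profile and the kube-context, and disables interactive and port-forward behavior so the CI run is deterministic. In outline (a sketch; the /tmp binary name is this run's randomized temp file):

	out/minikube-linux-amd64 start -p skaffold-854387 --memory=2600 --driver=kvm2
	/tmp/skaffold.exe1258449713 run --minikube-profile skaffold-854387 --kube-context skaffold-854387 \
	  --status-check=true --port-forward=false --interactive=false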

                                                
                                    
TestRunningBinaryUpgrade (170.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.921717366.exe start -p running-upgrade-429823 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.921717366.exe start -p running-upgrade-429823 --memory=2200 --vm-driver=kvm2 : (1m42.611761155s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-429823 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-429823 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m5.691637871s)
helpers_test.go:175: Cleaning up "running-upgrade-429823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-429823
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-429823: (1.806305488s)
--- PASS: TestRunningBinaryUpgrade (170.43s)

                                                
                                    
TestKubernetesUpgrade (253.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (2m13.080920337s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-903688
E0811 23:38:51.339087   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-903688: (13.360521144s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-903688 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-903688 status --format={{.Host}}: exit status 7 (67.45416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (48.871835923s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-903688 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (100.219139ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-903688] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-903688
	    minikube start -p kubernetes-upgrade-903688 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9036882 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-903688 --kubernetes-version=v1.28.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
E0811 23:39:51.066848   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (56.869794754s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-903688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-903688
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-903688: (1.097757601s)
--- PASS: TestKubernetesUpgrade (253.51s)
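
Note: the run above demonstrates minikube's version policy: in-place upgrades are allowed, while downgrades are refused with exit status 106 and require delete-and-recreate, exactly as the suggestion block above spells out. Condensed (a sketch; versions and profile name from this run):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --kubernetes-version=v1.16.0 --driver=kvm2
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-903688
	out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --kubernetes-version=v1.28.0-rc.0 --driver=kvm2   # upgrade: allowed
	out/minikube-linux-amd64 start -p kubernetes-upgrade-903688 --kubernetes-version=v1.16.0 --driver=kvm2        # downgrade: exit status 106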

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (186.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.3548620737.exe start -p stopped-upgrade-169197 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.3548620737.exe start -p stopped-upgrade-169197 --memory=2200 --vm-driver=kvm2 : (1m42.240290203s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.3548620737.exe -p stopped-upgrade-169197 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.3548620737.exe -p stopped-upgrade-169197 stop: (13.095789782s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-169197 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0811 23:38:29.912345   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-169197 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m11.434538962s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (186.77s)
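
Note: the upgrade path above is: create and stop a cluster with an old release (v1.6.2, fetched to a randomized /tmp name), then start the same profile with the binary under test. Condensed (a sketch; names from this run):

	/tmp/minikube-v1.6.2.3548620737.exe start -p stopped-upgrade-169197 --memory=2200 --vm-driver=kvm2
	/tmp/minikube-v1.6.2.3548620737.exe -p stopped-upgrade-169197 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-169197 --memory=2200 --driver=kvm2   # in-place binary upgrade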

                                                
                                    
TestPause/serial/Start (75.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-305435 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-305435 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m15.338772513s)
--- PASS: TestPause/serial/Start (75.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (50.74s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-305435 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-305435 --alsologtostderr -v=1 --driver=kvm2 : (50.707030528s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.74s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-169197
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-169197: (1.298704006s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (79.924592ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-838892] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-9593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-9593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
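
Note: this is a pure flag-validation failure: --no-kubernetes and --kubernetes-version are mutually exclusive, and the same error fires when a version is pinned in the global config. The failing call and the suggested fix (a sketch; profile name from this run):

	out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2   # exit status 14
	minikube config unset kubernetes-version   # clears a globally pinned version, per the hint above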

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (61.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-838892 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-838892 --driver=kvm2 : (1m0.904766971s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-838892 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (61.16s)

                                                
                                    
TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-305435 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-305435 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-305435 --output=json --layout=cluster: exit status 2 (304.821582ms)

                                                
                                                
-- stdout --
	{"Name":"pause-305435","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-305435","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
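
Note: the --layout=cluster JSON above nests per-node component states and reuses HTTP-style codes for them (200 OK, 418 Paused, 405 Stopped). One way to extract a single field, assuming jq is available (a sketch, not part of the test):

	out/minikube-linux-amd64 status -p pause-305435 --output=json --layout=cluster \
	  | jq -r '.Nodes[0].Components.kubelet.StatusName'   # prints "Stopped" for the paused cluster above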

                                                
                                    
TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-305435 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-305435 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
TestPause/serial/DeletePaused (1.28s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-305435 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-305435 --alsologtostderr -v=5: (1.282957826s)
--- PASS: TestPause/serial/DeletePaused (1.28s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (13.92s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.921123642s)
--- PASS: TestPause/serial/VerifyDeletedResources (13.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (53.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --driver=kvm2 : (52.306614257s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-838892 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-838892 status -o json: exit status 2 (241.941865ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-838892","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-838892
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-838892: (1.187735802s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (53.74s)

                                                
                                    
TestNoKubernetes/serial/Start (46.04s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --driver=kvm2 
E0811 23:41:33.702535   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-838892 --no-kubernetes --driver=kvm2 : (46.04425646s)
--- PASS: TestNoKubernetes/serial/Start (46.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-838892 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-838892 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.142438ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
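
Note: the assertion above is that the kubelet unit is inactive inside the guest; systemctl is-active exits non-zero for inactive units, which the test treats as success. Manual form (a sketch; profile name from this run):

	out/minikube-linux-amd64 ssh -p NoKubernetes-838892 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero (surfaced as ssh status 3 above) means kubelet is not running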

                                                
                                    
TestNetworkPlugins/group/auto/Start (110.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m50.109666189s)
--- PASS: TestNetworkPlugins/group/auto/Start (110.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (99.06s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m39.056115293s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.06s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (140.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0811 23:44:51.067063   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m20.386698203s)
--- PASS: TestNetworkPlugins/group/calico/Start (140.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7bg74" [0c6d6ad7-04ed-4e12-9f52-b7c5d75098a7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.037877091s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-znb8s" [e248db27-f5c5-4cfb-872b-804fa51c38a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-znb8s" [e248db27-f5c5-4cfb-872b-804fa51c38a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.017022821s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (88.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m28.695479343s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.70s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.51s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-r7g6j" [5790d453-2c59-48b0-8e39-3a4dcb6d377b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-r7g6j" [5790d453-2c59-48b0-8e39-3a4dcb6d377b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.018199348s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.51s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
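
Note: each plugin's DNS/Localhost/HairPin trio above runs the same three probes inside the netcat deployment: cluster DNS resolution, a localhost port dial, and a hairpin dial back through the service name. For reference (a sketch; context name from this run):

	CTX=kindnet-933926
	kubectl --context $CTX exec deployment/netcat -- nslookup kubernetes.default                     # DNS
	kubectl --context $CTX exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # localhost
	kubectl --context $CTX exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # hairpin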

                                                
                                    
TestNetworkPlugins/group/false/Start (83s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0811 23:46:49.999986   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m23.004327268s)
--- PASS: TestNetworkPlugins/group/false/Start (83.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (108.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0811 23:47:00.241222   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m48.741864946s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.74s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fb5m9" [4c7e3ca9-09f5-4de9-8770-0bc3aac5a883] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026621726s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.51s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jjxrn" [ffe2ab6a-d69b-48d7-93e5-7808864f7299] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0811 23:47:20.721778   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-jjxrn" [ffe2ab6a-d69b-48d7-93e5-7808864f7299] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.015948582s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.51s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (96.44s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m36.44328504s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (16.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-ngnrz" [246ae819-190b-4441-b734-52134aecd652] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0811 23:47:54.113520   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-ngnrz" [246ae819-190b-4441-b734-52134aecd652] Running
E0811 23:48:01.682963   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.011978013s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.47s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-4bqgs" [22e2d00d-238d-4333-9228-df46fdf2e1db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-4bqgs" [22e2d00d-238d-4333-9228-df46fdf2e1db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.013988371s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.47s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (79.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m19.236037324s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.24s)

TestNetworkPlugins/group/false/DNS (16.81s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-933926 exec deployment/netcat -- nslookup kubernetes.default
E0811 23:48:29.912202   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-933926 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.203308826s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context false-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (16.81s)

TestNetworkPlugins/group/false/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.5s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mhhgv" [9a0e5e25-0399-4493-88e5-2eb37d81c6b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0811 23:48:51.338767   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-mhhgv" [9a0e5e25-0399-4493-88e5-2eb37d81c6b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.017324304s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.50s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/kubenet/Start (87.77s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-933926 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m27.770450605s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.77s)

TestStartStop/group/old-k8s-version/serial/FirstStart (169.53s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-407430 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-407430 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m49.532202542s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (169.53s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xrsbk" [2ea646be-8732-4558-a2cf-ad1457e54b5e] Running
E0811 23:49:23.603446   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020612003s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/flannel/NetCatPod (12.44s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hmhgm" [dab18502-3b4f-4c9d-8825-55ec36adc0f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hmhgm" [dab18502-3b4f-4c9d-8825-55ec36adc0f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.012914956s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.44s)

TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (13.49s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vcvxb" [f64077b1-131e-4c4c-8ced-9062f95d6555] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0811 23:49:51.066822   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-vcvxb" [f64077b1-131e-4c4c-8ced-9062f95d6555] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.013073134s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.49s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (100.47s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-018362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-018362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.0-rc.0: (1m40.467768255s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.47s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-063031 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-063031 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.27.4: (1m39.936268672s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.94s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-933926 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.44s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-933926 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rkmpq" [688f5e9f-3ee7-4f4c-a136-5cae71d2d35f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rkmpq" [688f5e9f-3ee7-4f4c-a136-5cae71d2d35f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.011874993s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.44s)

TestNetworkPlugins/group/kubenet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-933926 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-933926 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

TestStartStop/group/newest-cni/serial/FirstStart (85.86s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-169672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.0-rc.0
E0811 23:51:13.100924   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:51:18.253826   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.259141   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.269445   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.289735   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.330074   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.411217   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.571965   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:18.892425   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:19.161886   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.167201   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.177521   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.197866   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.238227   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.318960   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.479417   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:19.533482   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:19.799728   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:20.440425   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:20.814177   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:21.721546   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:23.374350   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:24.282673   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:28.495001   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:51:29.403173   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:51:32.959267   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:51:38.735880   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-169672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.0-rc.0: (1m25.855924521s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (85.86s)

TestStartStop/group/no-preload/serial/DeployApp (10.57s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-018362 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0811 23:51:39.644266   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
helpers_test.go:344: "busybox" [474e2dfa-e0d5-4d53-bfa7-eb0939f1ae16] Pending
E0811 23:51:39.759212   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
helpers_test.go:344: "busybox" [474e2dfa-e0d5-4d53-bfa7-eb0939f1ae16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [474e2dfa-e0d5-4d53-bfa7-eb0939f1ae16] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.028401222s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-018362 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.57s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-018362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-018362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.327099207s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-018362 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/no-preload/serial/Stop (13.12s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-018362 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-018362 --alsologtostderr -v=3: (13.118324011s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.12s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-063031 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00be4dcb-54bb-4f4a-adc5-e2fcec458336] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0811 23:51:59.216316   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:52:00.124792   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
helpers_test.go:344: "busybox" [00be4dcb-54bb-4f4a-adc5-e2fcec458336] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.044336648s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-063031 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.63s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018362 -n no-preload-018362
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018362 -n no-preload-018362: exit status 7 (65.394446ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-018362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (330.99s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-018362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-018362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.0-rc.0: (5m30.742257638s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018362 -n no-preload-018362
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.99s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-407430 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1289d4c9-6545-4c19-ac23-ef42f5d9a0b1] Pending
helpers_test.go:344: "busybox" [1289d4c9-6545-4c19-ac23-ef42f5d9a0b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0811 23:52:07.444166   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1289d4c9-6545-4c19-ac23-ef42f5d9a0b1] Running
E0811 23:52:09.624178   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:10.265273   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:11.545581   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:14.106331   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.035542574s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-407430 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-063031 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-063031 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.224397126s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-063031 describe deploy/metrics-server -n kube-system
E0811 23:52:08.986586   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:08.991912   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:09.002290   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:09.022603   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-063031 --alsologtostderr -v=3
E0811 23:52:09.063200   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:09.143299   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:09.303522   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-063031 --alsologtostderr -v=3: (13.121866393s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-407430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-407430 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/Stop (13.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-407430 --alsologtostderr -v=3
E0811 23:52:19.227485   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-407430 --alsologtostderr -v=3: (13.152644169s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031: exit status 7 (54.956418ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-063031 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.42s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-063031 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-063031 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.27.4: (5m14.146452575s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031
E0811 23:57:36.672506   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.42s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-169672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-169672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.148929426s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/newest-cni/serial/Stop (12.13s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-169672 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-169672 --alsologtostderr -v=3: (12.131352376s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-407430 -n old-k8s-version-407430
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-407430 -n old-k8s-version-407430: exit status 7 (71.172324ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-407430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (479.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-407430 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0811 23:52:29.468646   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-407430 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m58.967468653s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-407430 -n old-k8s-version-407430
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (479.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-169672 -n newest-cni-169672
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-169672 -n newest-cni-169672: exit status 7 (57.144661ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-169672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (83.74s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-169672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.0-rc.0
E0811 23:52:40.177260   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:52:41.085918   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:52:49.949847   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:52:51.561806   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:51.567152   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:51.577786   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:51.598629   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:51.639640   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:51.720765   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:51.881665   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:52.202387   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:52.843028   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:54.123207   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:52:56.683929   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:53:01.805007   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:53:12.045417   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:53:12.854708   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:12.860002   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:12.870234   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:12.890529   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:12.931504   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:13.012218   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:13.173323   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:13.493455   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:14.134191   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:15.414382   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:17.974541   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:23.095025   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:29.911725   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/ingress-addon-legacy-581758/client.crt: no such file or directory
E0811 23:53:30.910524   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:53:32.525820   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:53:33.335974   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:44.473340   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:44.478698   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:44.488965   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:44.509244   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:44.549554   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:44.629897   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:44.791018   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:45.111991   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:45.752327   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:47.032921   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:49.593092   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:53:51.339426   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/addons-894170/client.crt: no such file or directory
E0811 23:53:53.816349   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:53:54.713933   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:54:02.097561   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-169672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.0-rc.0: (1m23.467427323s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-169672 -n newest-cni-169672
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (83.74s)
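
The cert_rotation.go:168 errors threaded through this test are background noise from the shared test process (pid 16836) rather than part of the newest-cni verdict: client-go's certificate-rotation watcher still references client certificates of profiles that earlier tests created and then deleted (false-933926, enable-default-cni-933926, and similar NetworkPlugins profiles), so each periodic reload fails with "no such file or directory". A minimal way to confirm this on the Jenkins worker, sketched with one profile name taken from the errors above:

    # profiles minikube still knows about vs. a cert path the watcher still polls
    out/minikube-linux-amd64 profile list
    ls /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt \
        || echo "client.crt already deleted, matching the cert_rotation noise"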

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-169672 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
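
VerifyKubernetesImages works by shelling into the node and dumping the container runtime's image list as JSON through crictl; any image outside minikube's expected set is reported, which is how gvisor-addon is flagged above. The same inspection can be reproduced by hand; the jq filter here is an illustrative addition, not something the harness runs:

    out/minikube-linux-amd64 ssh -p newest-cni-169672 "sudo crictl images -o json" \
        | jq -r '.images[].repoTags[]?'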

TestStartStop/group/newest-cni/serial/Pause (2.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-169672 --alsologtostderr -v=1
E0811 23:54:03.006394   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-169672 -n newest-cni-169672
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-169672 -n newest-cni-169672: exit status 2 (232.529307ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-169672 -n newest-cni-169672
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-169672 -n newest-cni-169672: exit status 2 (229.454961ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-169672 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-169672 -n newest-cni-169672
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-169672 -n newest-cni-169672
E0811 23:54:04.954528   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)
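
The Pause check above is a pause/status/unpause/status round-trip: minikube pause freezes the control-plane containers and stops the kubelet, so the Go-template status queries print "Paused" and "Stopped" and exit non-zero (status 2), which the harness explicitly tolerates ("may be ok"). Condensed into a sketch against this profile:

    out/minikube-linux-amd64 pause -p newest-cni-169672
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-169672    # Paused, exit status 2 per the log
    out/minikube-linux-amd64 unpause -p newest-cni-169672
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-169672    # Running again, exit status 0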

TestStartStop/group/embed-certs/serial/FirstStart (76.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-425622 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.27.4
E0811 23:54:13.486234   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:54:22.794314   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:22.799603   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:22.809875   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:22.830168   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:22.870612   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:22.950916   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:23.111415   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:23.432056   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:24.073020   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:25.354076   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:25.435544   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:54:27.914318   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:33.035406   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:34.777507   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:54:43.275566   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:54:45.360583   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:45.366167   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:45.376553   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:45.396989   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:45.438018   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:45.519175   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:45.679669   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:46.000314   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:46.640798   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:47.921320   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:50.482158   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:54:51.067736   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/functional-035969/client.crt: no such file or directory
E0811 23:54:52.831331   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:54:55.602913   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:55:03.756712   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:55:05.843814   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:55:06.396319   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-425622 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.27.4: (1m16.976349942s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.98s)
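
The distinguishing flag in this group is --embed-certs, which inlines the client certificate and key into the generated kubeconfig as base64 *-data fields instead of referencing files under .minikube/profiles. A quick check of the result, sketched with a kubectl jsonpath query (minikube names the kubeconfig user after the profile, so embed-certs-425622 is assumed here):

    # non-empty output means the client cert is embedded, not referenced by path
    kubectl config view --raw \
        -o jsonpath='{.users[?(@.name=="embed-certs-425622")].user.client-certificate-data}' | head -c 40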

TestStartStop/group/embed-certs/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-425622 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2e34e9c-6c51-44fb-a1a6-775a6be66ed0] Pending
helpers_test.go:344: "busybox" [a2e34e9c-6c51-44fb-a1a6-775a6be66ed0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0811 23:55:26.323995   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a2e34e9c-6c51-44fb-a1a6-775a6be66ed0] Running
E0811 23:55:29.759008   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:29.764283   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:29.774548   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:29.794931   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:29.835253   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:29.915630   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:30.076305   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:30.396687   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:31.037079   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.035577998s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-425622 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.52s)
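
DeployApp creates the busybox pod from testdata and polls until a pod labelled integration-test=busybox is Ready (the Pending, ContainersNotReady, Running progression above), then finishes with a ulimit probe inside the container. The harness uses its own poller, but an equivalent hand-rolled check could look like this sketch:

    kubectl --context embed-certs-425622 create -f testdata/busybox.yaml
    kubectl --context embed-certs-425622 wait pod -l integration-test=busybox \
        --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-425622 exec busybox -- /bin/sh -c "ulimit -n"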

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-425622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0811 23:55:32.317531   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-425622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.120124348s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-425622 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (13.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-425622 --alsologtostderr -v=3
E0811 23:55:34.877979   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:35.406437   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/custom-flannel-933926/client.crt: no such file or directory
E0811 23:55:39.998709   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:44.717596   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-425622 --alsologtostderr -v=3: (13.116829793s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425622 -n embed-certs-425622
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425622 -n embed-certs-425622: exit status 7 (65.192042ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-425622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
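
Note the pattern here: minikube status on a stopped cluster exits with a non-zero state code (7 in this run) rather than failing outright, the harness accepts it ("may be ok"), and the dashboard addon is then enabled while the VM is down so SecondStart can verify it after the restart. Sketched, with the exit-code reading taken from this log rather than stated authoritatively:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-425622 -n embed-certs-425622
    echo "status exit code: $?"    # 7 for this stopped cluster, per the output above
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-425622 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4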

TestStartStop/group/embed-certs/serial/SecondStart (332.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-425622 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.27.4
E0811 23:55:50.239165   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:55:56.698108   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/false-933926/client.crt: no such file or directory
E0811 23:56:07.284841   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
E0811 23:56:10.720286   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:56:13.101027   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
E0811 23:56:18.253789   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:56:19.162120   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:56:28.316967   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/enable-default-cni-933926/client.crt: no such file or directory
E0811 23:56:39.759482   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/gvisor-358056/client.crt: no such file or directory
E0811 23:56:45.937865   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kindnet-933926/client.crt: no such file or directory
E0811 23:56:46.847387   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
E0811 23:56:51.680748   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
E0811 23:57:06.638310   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/flannel-933926/client.crt: no such file or directory
E0811 23:57:08.986369   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/calico-933926/client.crt: no such file or directory
E0811 23:57:29.205399   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/bridge-933926/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-425622 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.27.4: (5m32.461239416s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425622 -n embed-certs-425622
E0812 00:01:19.161931   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/auto-933926/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.72s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fskxj" [94f9a328-4319-4b9b-b0fc-d40c071aec9d] Running
E0811 23:57:36.267074   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/skaffold-854387/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022356399s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-ztvdb" [88bdb01d-093f-43fd-9b11-84892a1c6c11] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018544112s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fskxj" [94f9a328-4319-4b9b-b0fc-d40c071aec9d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012231577s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-018362 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-ztvdb" [88bdb01d-093f-43fd-9b11-84892a1c6c11] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011894002s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-063031 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-018362 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.74s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-018362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018362 -n no-preload-018362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018362 -n no-preload-018362: exit status 2 (243.100115ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-018362 -n no-preload-018362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-018362 -n no-preload-018362: exit status 2 (251.135988ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-018362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018362 -n no-preload-018362
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-018362 -n no-preload-018362
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.74s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-063031 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-063031 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031: exit status 2 (272.034672ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031: exit status 2 (270.758907ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-063031 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063031 -n default-k8s-diff-port-063031
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-w8rsm" [176496ba-ca78-4614-8dc3-888edbafaf6a] Running
E0812 00:00:29.758721   16836 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-9593/.minikube/profiles/kubenet-933926/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020035364s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-w8rsm" [176496ba-ca78-4614-8dc3-888edbafaf6a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01276849s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-407430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-407430 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-407430 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-407430 -n old-k8s-version-407430
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-407430 -n old-k8s-version-407430: exit status 2 (249.292686ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-407430 -n old-k8s-version-407430
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-407430 -n old-k8s-version-407430: exit status 2 (254.982215ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-407430 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-407430 -n old-k8s-version-407430
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-407430 -n old-k8s-version-407430
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-89dwf" [d084a79d-d098-4d67-8b81-60befe9b6db1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021804804s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-89dwf" [d084a79d-d098-4d67-8b81-60befe9b6db1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014173814s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-425622 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-425622 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-425622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425622 -n embed-certs-425622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425622 -n embed-certs-425622: exit status 2 (226.735247ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-425622 -n embed-certs-425622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-425622 -n embed-certs-425622: exit status 2 (228.376194ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-425622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425622 -n embed-certs-425622
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-425622 -n embed-certs-425622
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.49s)
Test skip (34/320)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.27.4/cached-images 0
13 TestDownloadOnly/v1.27.4/binaries 0
14 TestDownloadOnly/v1.27.4/kubectl 0
19 TestDownloadOnly/v1.28.0-rc.0/cached-images 0
20 TestDownloadOnly/v1.28.0-rc.0/binaries 0
21 TestDownloadOnly/v1.28.0-rc.0/kubectl 0
25 TestDownloadOnlyKic 0
36 TestAddons/parallel/Olm 0
49 TestDockerEnvContainerd 0
51 TestHyperKitDriverInstallOrUpdate 0
52 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
161 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
228 TestChangeNoneUser 0
231 TestScheduledStopWindows 0
235 TestInsufficientStorage 0
239 TestMissingContainerUpgrade 0
250 TestNetworkPlugins/group/cilium 3.01
258 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
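
All eight TunnelCmd subtests skip for the same reason: minikube tunnel has to modify the host routing table, and the Jenkins user cannot run route without a password. A sketch of the precondition these tests effectively probe:

    # tunnel tests need passwordless sudo for route changes
    sudo -n true 2>/dev/null || echo "no passwordless sudo: TunnelCmd tests will skip"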

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

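The four Kic* skips above all gate on the active container driver; this KVM_Linux run uses the kvm2 driver, so docker/podman-only tests bail out early. A minimal sketch of such a driver gate (helper name and signature are illustrative assumptions, not the actual kic_custom_network_test.go source):

package driversketch

import (
	"strings"
	"testing"
)

// skipUnlessDriver skips the calling test unless the active driver is one
// of the supported ones, producing messages like the "only runs with
// docker/podman driver" lines in this report.
func skipUnlessDriver(t *testing.T, active string, supported ...string) {
	t.Helper()
	for _, s := range supported {
		if active == s {
			return
		}
	}
	t.Skipf("only runs with %s driver", strings.Join(supported, "/"))
}
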
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.01s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-933926 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-933926

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-933926

>>> host: /etc/nsswitch.conf:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/hosts:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/resolv.conf:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-933926

>>> host: crictl pods:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: crictl containers:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> k8s: describe netcat deployment:
error: context "cilium-933926" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-933926" does not exist

>>> k8s: netcat logs:
error: context "cilium-933926" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-933926" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-933926" does not exist

>>> k8s: coredns logs:
error: context "cilium-933926" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-933926" does not exist

>>> k8s: api server logs:
error: context "cilium-933926" does not exist

>>> host: /etc/cni:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: ip a s:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: ip r s:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: iptables-save:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: iptables table nat:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-933926

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-933926

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-933926" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-933926" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-933926

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-933926

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-933926" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-933926" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-933926" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-933926" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-933926" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: kubelet daemon config:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> k8s: kubelet logs:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-933926

>>> host: docker daemon status:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: docker daemon config:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: docker system info:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: cri-docker daemon status:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: cri-docker daemon config:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: cri-dockerd version:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: containerd daemon status:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: containerd daemon config:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: containerd config dump:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: crio daemon status:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: crio daemon config:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: /etc/crio:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

>>> host: crio config:
* Profile "cilium-933926" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933926"

----------------------- debugLogs end: cilium-933926 [took: 2.863384323s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-933926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-933926
--- SKIP: TestNetworkPlugins/group/cilium (3.01s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-145469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-145469
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)